Fold prepacked weight into module (#26579)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26579
Remove the `linear_prepack` call and instead attach a module
containing the packed weight and bias to the parent class.
This is needed to support serialization of the quantized model:
the packed weight and bias are not serializable themselves, so we
override the `__getstate__` and `__setstate__` functions to
serialize the unpacked values and re-pack them on load.
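The pattern described above can be sketched in plain Python. This is a minimal illustration, not the actual quantized-ops API: `PackedWeight` and `LinearPackedParams` are hypothetical stand-ins for an opaque, non-picklable packed object and the module that owns it.

```python
import pickle


class PackedWeight:
    """Hypothetical stand-in for an opaque, non-picklable packed weight."""
    def __init__(self, weight, bias):
        self.weight = weight
        self.bias = bias

    def __reduce__(self):
        # Simulate "not serializable": direct pickling fails.
        raise TypeError("PackedWeight cannot be pickled")


class LinearPackedParams:
    def __init__(self, weight, bias):
        # At runtime, only the packed form is stored on the module.
        self._packed = PackedWeight(weight, bias)

    def __getstate__(self):
        # Serialize the plain weight and bias instead of the packed object.
        return {"weight": self._packed.weight, "bias": self._packed.bias}

    def __setstate__(self, state):
        # Re-pack on load so the runtime representation is restored.
        self._packed = PackedWeight(state["weight"], state["bias"])


params = LinearPackedParams([1.0, 2.0], 0.5)
restored = pickle.loads(pickle.dumps(params))
assert restored._packed.weight == [1.0, 2.0]
assert restored._packed.bias == 0.5
```

Because `__getstate__` returns only plain values, pickle never touches the non-picklable `PackedWeight` instance, and `__setstate__` rebuilds it transparently when the model is loaded.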
Test Plan:
python test/test_jit.py
Imported from OSS
Differential Revision: D17636397
fbshipit-source-id: 3b81b6faa4413e4309453fd6acec2f0be6fd2f16