[quant] update embedding module to not store qweight (#50418)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50418
Previously we were storing the quantized weight as a module attribute, which
resulted in the weight being serialized as part of the model.
We don't need this, since we already store the unpacked weights as part of the model.
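The pattern involved is roughly the following. This is a hedged, schematic sketch with made-up class names (`PackedParams`, `QuantizedEmbeddingBefore`/`After`), not the actual `embedding_ops.py` code; it only illustrates why caching the quantized weight on the module duplicates it in the saved archive.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PackedParams(nn.Module):
    """Stand-in for the holder module that already owns the quantized weight."""

    def __init__(self, qweight):
        super().__init__()
        self.qweight = qweight

    def unpack(self):
        return self.qweight


class QuantizedEmbeddingBefore(nn.Module):
    def __init__(self, qweight):
        super().__init__()
        self._packed_params = PackedParams(qweight)
        # Redundant cached copy: shows up as an extra tmp/data/N entry
        # when the scripted module is saved.
        self.qweight = qweight.clone()

    def forward(self, indices):
        return F.embedding(indices, self.qweight.dequantize())


class QuantizedEmbeddingAfter(nn.Module):
    def __init__(self, qweight):
        super().__init__()
        self._packed_params = PackedParams(qweight)  # single copy stored

    def forward(self, indices):
        # Derive the weight from the packed params when it is needed,
        # instead of keeping a second copy as a module attribute.
        return F.embedding(indices, self._packed_params.unpack().dequantize())


# Tiny usage check with a per-tensor quantized weight.
qw = torch.quantize_per_tensor(
    torch.randn(10, 12), scale=0.1, zero_point=0, dtype=torch.quint8
)
out = QuantizedEmbeddingAfter(qw)(torch.tensor([0, 3, 7]))
print(out.shape)  # torch.Size([3, 12])
```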
Test Plan:
Archive listing of the saved TorchScript model (tmp.pt) before and after the change. Before, the weight-related data buffers are serialized twice (tmp/data/0-5); after, only once (tmp/data/0-2), cutting the archive roughly in half (3,436,971 bytes -> 1,720,547 bytes).

Before
```
Archive: tmp.pt
Length Method Size Cmpr Date Time CRC-32 Name
-------- ------ ------- ---- ---------- ----- -------- ----
586 Stored 586 0% 00-00-1980 00:00 5fefdda0 tmp/extra/producer_info.json
1588700 Stored 1588700 0% 00-00-1980 00:00 04e0da4c tmp/data/0
63548 Stored 63548 0% 00-00-1980 00:00 0ceb1f45 tmp/data/1
63548 Stored 63548 0% 00-00-1980 00:00 517bc3ab tmp/data/2
1588700 Stored 1588700 0% 00-00-1980 00:00 dbe88c73 tmp/data/3
63548 Stored 63548 0% 00-00-1980 00:00 d8dc47c4 tmp/data/4
63548 Stored 63548 0% 00-00-1980 00:00 b9e0c20f tmp/data/5
1071 Stored 1071 0% 00-00-1980 00:00 10dc9350 tmp/data.pkl
327 Defl:N 203 38% 00-00-1980 00:00 dfddb661 tmp/code/__torch__/___torch_mangle_0.py
185 Stored 185 0% 00-00-1980 00:00 308f580b tmp/code/__torch__/___torch_mangle_0.py.debug_pkl
1730 Defl:N 515 70% 00-00-1980 00:00 aa11f799 tmp/code/__torch__/torch/nn/quantized/modules/embedding_ops.py
1468 Defl:N 636 57% 00-00-1980 00:00 779609a6 tmp/code/__torch__/torch/nn/quantized/modules/embedding_ops.py.debug_pkl
0 Stored 0 0% 00-00-1980 00:00 00000000 tmp/code/__torch__/torch/classes/quantized.py
6 Stored 6 0% 00-00-1980 00:00 816d0907 tmp/code/__torch__/torch/classes/quantized.py.debug_pkl
4 Stored 4 0% 00-00-1980 00:00 57092f6d tmp/constants.pkl
2 Stored 2 0% 00-00-1980 00:00 55679ed1 tmp/version
-------- ------- --- -------
3436971 3434800 0% 16 files
```
After
```
Archive: tmp.pt
Length Method Size Cmpr Date Time CRC-32 Name
-------- ------ ------- ---- ---------- ----- -------- ----
1588700 Stored 1588700 0% 00-00-1980 00:00 a4da6981 tmp/data/0
63548 Stored 63548 0% 00-00-1980 00:00 74d9b607 tmp/data/1
63548 Stored 63548 0% 00-00-1980 00:00 e346a0c2 tmp/data/2
952 Stored 952 0% 00-00-1980 00:00 eff8706e tmp/data.pkl
375 Defl:N 227 40% 00-00-1980 00:00 96c77b68 tmp/code/__torch__/quantization/test_quantize/___torch_mangle_23.py
228 Defl:N 162 29% 00-00-1980 00:00 6a378113 tmp/code/__torch__/quantization/test_quantize/___torch_mangle_23.py.debug_pkl
1711 Defl:N 509 70% 00-00-1980 00:00 66d8fd61 tmp/code/__torch__/torch/nn/quantized/modules/embedding_ops.py
1473 Defl:N 634 57% 00-00-1980 00:00 beb2323b tmp/code/__torch__/torch/nn/quantized/modules/embedding_ops.py.debug_pkl
0 Stored 0 0% 00-00-1980 00:00 00000000 tmp/code/__torch__/torch/classes/quantized.py
6 Stored 6 0% 00-00-1980 00:00 816d0907 tmp/code/__torch__/torch/classes/quantized.py.debug_pkl
4 Stored 4 0% 00-00-1980 00:00 57092f6d tmp/constants.pkl
2 Stored 2 0% 00-00-1980 00:00 55679ed1 tmp/version
-------- ------- --- -------
1720547 1718292 0% 12 files
```
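For reference, a listing like the ones above can be reproduced by walking the zip archive that `torch.jit.save` writes. This is a sketch using the standard-library `zipfile` module (the exact tool used to produce the listings in this test plan is not specified here), assuming the saved model is at `tmp.pt`.
```python
import zipfile


def list_archive(path):
    """Print each entry's uncompressed size and name, plus totals."""
    with zipfile.ZipFile(path) as zf:
        infos = zf.infolist()
        for info in infos:
            print(f"{info.file_size:>10}  {info.filename}")
        total = sum(info.file_size for info in infos)
        print(f"{total:>10}  ({len(infos)} files)")


list_archive("tmp.pt")
```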
Imported from OSS
Reviewed By: jerryzh168
Differential Revision: D25879879
fbshipit-source-id: e09427a60d4c44dd1a190575e75f3ed9cde6358f