[quant][fix] MHA tensor assignment fix (#53031)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53031
During module conversion, the weight tensor was assigned directly to the attribute holding the linear layer inside the quantizable MHA, replacing the layer itself. Instead, the weight must be assigned to `layer.weight`.
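A minimal sketch of the distinction (names here are illustrative, not the actual conversion code):

```python
import torch
import torch.nn as nn

# Hypothetical standalone linear layer standing in for the one inside
# the quantizable MHA module.
linear = nn.Linear(4, 4)
w = torch.randn(4, 4)

# Wrong: `parent.linear = w` would replace the submodule with a raw
# tensor, so later calls to the layer fail.

# Right: assign the tensor to the layer's `weight` parameter.
linear.weight = nn.Parameter(w)
print(torch.equal(linear.weight, w))  # True
```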
Test Plan:
`buck test mode/opt //caffe2/test:quantization -- test_custom_module_multi_head_attention`
```
Building: finished in 6.9 sec (100%) 7316/7316 jobs, 3 updated
Total time: 7.4 sec
More details at https://www.internalfb.com/intern/buck/build/914cb095-806e-4891-8822-e2644283f05c
Tpx test run coordinator for Facebook. See https://fburl.com/tpx for details.
Running with tpx session id: fcccbd0b-a887-4874-8455-d1cf8411be1d
Trace available for this run at /tmp/tpx-20210301-004359.492205/trace.log
Started reporting to test run: https://www.internalfb.com/intern/testinfra/testrun/1688849910412609
✓ ListingSuccess: caffe2/test:quantization - main (2.440)
✓ Pass: caffe2/test:quantization - test_custom_module_multi_head_attention (quantization.test_quantized_op.TestQuantizedOps) (5.672)
Summary
Pass: 1
ListingSuccess: 1
Finished test run: https://www.internalfb.com/intern/testinfra/testrun/1688849910412609
```
Reviewed By: raghuramank100
Differential Revision: D26720500
fbshipit-source-id: 3ba5d5df1c23cc5150c4a293d3c93c44dc702e50