fx quant: add workflow support for torch.matmul quantization (#72444)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72444
In https://github.com/pytorch/pytorch/pull/71783 support was added for
quantized matmul.
This PR adds FX graph mode quantization workflow support for this
operator for int8 dtypes.
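
The workflow described above can be sketched as follows. This is an illustrative example, not code from the PR: the module name `MatmulModule` is hypothetical, and the snippet assumes a recent FX quantization API (`get_default_qconfig_mapping`, `prepare_fx`, `convert_fx` from `torch.ao.quantization`):

```python
import torch
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx

# Hypothetical module whose only op is torch.matmul, the operator
# this PR adds int8 workflow support for.
class MatmulModule(torch.nn.Module):
    def forward(self, x, y):
        return torch.matmul(x, y)

m = MatmulModule().eval()
example_inputs = (torch.randn(2, 3), torch.randn(3, 4))

# Default int8 qconfig mapping; with workflow support, torch.matmul
# is now recognized and quantized by the FX passes.
qconfig_mapping = get_default_qconfig_mapping("fbgemm")
prepared = prepare_fx(m, qconfig_mapping, example_inputs)
prepared(*example_inputs)        # calibrate with sample data
quantized = convert_fx(prepared) # lower to the quantized graph
out = quantized(*example_inputs)
```

With workflow support in place, `prepare_fx` inserts observers around the matmul and `convert_fx` swaps in the quantized implementation, so no manual quant/dequant stubs are needed around the op.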
Test Plan:
```
python test/test_quantization.py TestQuantizeFxOps.test_qmatmul
```
Imported from OSS
Reviewed By: andrewor14
Differential Revision: D34047310
fbshipit-source-id: 781219047419ce621a4deb46ea04881818bf4209
(cherry picked from commit 7e039fa3a11dfd27d4d6c55dd890ebac47e77d00)