[quant][fx] Add lowering support for qat and fused convs (#73527)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73527
This includes:
```
torch.nn.qat.Conv2d,
torch.nn.qat.Conv3d,
torch.nn.intrinsic.qat.ConvBn1d,
torch.nn.intrinsic.qat.ConvBn2d,
torch.nn.intrinsic.qat.ConvBn3d,
torch.nn.intrinsic.qat.ConvBnReLU1d,
torch.nn.intrinsic.qat.ConvBnReLU2d,
torch.nn.intrinsic.qat.ConvBnReLU3d,
torch.nn.intrinsic.qat.ConvReLU2d,
torch.nn.intrinsic.qat.ConvReLU3d,
torch.nn.intrinsic.ConvReLU1d,
torch.nn.intrinsic.ConvReLU2d,
torch.nn.intrinsic.ConvReLU3d,
```
We first produce the reference pattern and then lower it to the corresponding quantized modules.
Test Plan:
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps
Imported from OSS
Reviewed By: andrewor14
Differential Revision: D34583206
fbshipit-source-id: d298114d1906ea44c071b0eee52730dadf67fd3e
(cherry picked from commit 6498af35b5aa6104cadb68ca48dff4e443bee7d6)