fx quant: hook up ConvTranspose{n}d (#49717)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49717
Quantization of `ConvTranspose{n}d` is supported in Eager mode. This PR
adds support for FX graph mode.
Note: this currently only works with the `qnnpack` backend, because
quantized conv transpose does not support per-channel weights. Until that
is fixed, a future PR should raise an error when someone tries to quantize
a ConvTranspose model with per-channel weight observers.
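For context, a minimal sketch of quantizing a ConvTranspose model via FX graph mode. The model and input shapes here are hypothetical, and the snippet uses the newer `torch.ao.quantization` module paths and `QConfigMapping` (the `prepare_fx` signature has changed since this PR); a per-tensor qconfig is used, since per-channel weights are unsupported for quantized conv transpose:

```python
import torch
import torch.nn as nn
from torch.ao.quantization import get_default_qconfig, QConfigMapping
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx

# Hypothetical model containing a ConvTranspose2d layer
class UpsampleBlock(nn.Module):
    def __init__(self):
        super().__init__()
        self.deconv = nn.ConvTranspose2d(4, 4, kernel_size=2, stride=2)

    def forward(self, x):
        return self.deconv(x)

# Select the qnnpack engine; its default qconfig observes weights per-tensor
torch.backends.quantized.engine = "qnnpack"
qconfig_mapping = QConfigMapping().set_global(get_default_qconfig("qnnpack"))

model = UpsampleBlock().eval()
example_inputs = (torch.randn(1, 4, 8, 8),)

# Insert observers, calibrate on sample data, then convert to a quantized model
prepared = prepare_fx(model, qconfig_mapping, example_inputs)
prepared(*example_inputs)
quantized = convert_fx(prepared)
out = quantized(torch.randn(1, 4, 8, 8))
```

With `kernel_size=2, stride=2` the transposed convolution doubles the spatial size, so an 8x8 input yields a 16x16 output.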
Test Plan:
```
python test/test_quantization.py TestQuantizeFxOps.test_conv_transpose_1d
python test/test_quantization.py TestQuantizeFxOps.test_conv_transpose_2d
```
Imported from OSS
Reviewed By: jerryzh168
Differential Revision: D25674636
fbshipit-source-id: b6948156123ed55db77e6337bea10db956215ae6