[PyTorch] Fix quantized Conv1d module parameters (#62356)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62356
In `torch/nn/quantized/modules/conv.py`, `Conv1d` turns a scalar `kernel_size` into a tuple of size 2 by repeating the value. This breaks `Conv1d` because internally the input of shape (N, C, L) is unsqueezed to (N, C, 1, L) in [`qconv.cpp`](https://github.com/pytorch/pytorch/blob/06dfaadfc6357ed909ed15c7ef79d503c49d9475/aten/src/ATen/native/quantized/cpu/qconv.cpp#L841). Applying the repeated kernel to this input shape produces a negative output shape in [`ConvUtils.h`](https://github.com/pytorch/FBGEMM/blob/203f7ff6e07d62b042e7d755fd1f4789d978e4d1/include/fbgemm/ConvUtils.h#L118-L119) whenever `kernel_size > 1`.
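To see why the repeated kernel goes wrong, here is a minimal sketch of the standard convolution output-size formula (the same arithmetic `ConvUtils.h` performs); `conv_out_size` is a hypothetical helper name, not a function from the codebase:

```python
def conv_out_size(in_size, kernel, stride=1, padding=0, dilation=1):
    # Standard convolution output-size formula.
    return (in_size + 2 * padding - dilation * (kernel - 1) - 1) // stride + 1

# Buggy path: a scalar kernel_size=3 is repeated into (3, 3).
# qconv.cpp unsqueezes the (N, C, L) input to (N, C, 1, L), so the
# height dimension is 1, and a kernel of height 3 yields a negative
# output height.
print(conv_out_size(1, 3))  # -1

# Correct path: after the internal unsqueeze the effective kernel
# should be (1, kernel_size), so the output height stays 1.
print(conv_out_size(1, 1))  # 1
```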
Here I modify the processing of `kernel_size` and a few other parameters so that it follows the pattern of [`torch/nn/modules/conv.py`](https://github.com/pytorch/pytorch/blob/aae2a3c95ee6d62e834a5e6890a12f7ecf0dd17f/torch/nn/modules/conv.py#L284-L287).
Test Plan: Rely on unit tests
Reviewed By: kimishpatel
Differential Revision: D29957556
fbshipit-source-id: ae13f7ca892d60b82cfffdf972cce422ebfaae8e