[ONNX] Quantization support for quantized::cat (#79826)
- Add support for quantized `cat`
- Add type annotations for helper functions
Now we can export quantized models that use `quantized::cat`, such as torchvision's `inception_v3`:
```python
import torchvision.models.quantization as models

# Quantized InceptionV3 exercises quantized::cat when concatenating branch outputs.
torch_model = models.inception_v3(pretrained=True, quantize=True)
```
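For completeness, a minimal sketch of the export step that follows, continuing from the snippet above (the dummy input shape, output path, and opset version are illustrative, not taken from this PR):
```python
import torch

torch_model.eval()

# InceptionV3 expects 299x299 inputs; batch size 1 is arbitrary.
dummy_input = torch.randn(1, 3, 299, 299)

# quantized::cat nodes in the Inception blocks are now handled by the exporter.
torch.onnx.export(
    torch_model,
    dummy_input,
    "quantized_inception_v3.onnx",  # illustrative output path
    opset_version=13,               # illustrative opset choice
)
```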
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79826
Approved by: https://github.com/AllenTiTaiWang, https://github.com/BowenBao