[quant][core][gpu][improvement] Converted reinterpret_cast<T *>(some_int8_tensor.data_ptr()) calls to some_int8_tensor.data_ptr<int8_t>() in quantized cudnn operator files (#75980)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75980
Support for data_ptr<T> on quantized tensors was enabled in
https://github.com/pytorch/pytorch/pull/75643. Rather than using
reinterpret_cast, we can now call this overload directly. The change
currently covers the files under aten/src/ATen/native/quantized/cudnn.
(Note: this ignores all push blocking failures!)
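The pattern can be sketched with a hypothetical mini-tensor (`MiniQTensor` below is an illustration only, not the ATen API): an untyped `data_ptr()` returning `void*` forces callers to `reinterpret_cast`, while a templated `data_ptr<T>()` overload gives back a correctly typed pointer directly, which is what the cudnn operator files switch to.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical stand-in for a quantized tensor; the real at::Tensor API
// differs, but the before/after access pattern is the same.
struct MiniQTensor {
  std::vector<int8_t> storage;

  // Untyped accessor: callers must reinterpret_cast the result.
  void* data_ptr() { return storage.data(); }

  // Typed accessor: returns the element pointer directly.
  template <typename T>
  T* data_ptr() { return static_cast<T*>(static_cast<void*>(storage.data())); }
};

// Before: cast at every call site.
int8_t first_elem_before(MiniQTensor& t) {
  int8_t* p = reinterpret_cast<int8_t*>(t.data_ptr());
  return p[0];
}

// After: the typed overload removes the cast.
int8_t first_elem_after(MiniQTensor& t) {
  int8_t* p = t.data_ptr<int8_t>();
  return p[0];
}
```

Both accessors yield the same pointer; the typed overload simply moves the type assertion into one place instead of scattering casts across operator files.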
Test Plan:
```
python test/test_quantization.py -k test_qlinear_cudnn
python test/test_quantization.py -k test_qconv2d_cudnn
python test/test_quantization.py -k test_qadd_relu_cudnn
```
Reviewed By: jerryzh168
Differential Revision: D35720654
Pulled By: dzdang
fbshipit-source-id: 5ba4b99f6cfaf1b482a0a3f5208c94e53cb05eba
(cherry picked from commit 92e2480fa0862261bff42761d5eab8ee0bb3b075)