[quant][core][bug fix] Corrected at::to(memory_format=...) support for quantized tensors
Summary:
Previously, at::to(memory_format=...) did not work properly for quantized tensors,
and we had to use at::contiguous instead. This PR allows us to use
at::to(memory_format=...) or torch.Tensor.to(memory_format=...)
on quantized tensors, on both the backend and the frontend.
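As an illustrative sketch (not taken verbatim from the PR's tests; the shape, scale, and zero_point below are arbitrary), the call that now works on quantized tensors looks like this:

    import torch

    # Build a per-tensor quantized tensor (illustrative shape and qparams).
    x = torch.randn(1, 3, 4, 4)
    qx = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.quint8)

    # Previously this required qx.contiguous(memory_format=...);
    # with this fix, Tensor.to(memory_format=...) works directly.
    qy = qx.to(memory_format=torch.channels_last)
    assert qy.is_contiguous(memory_format=torch.channels_last)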
Test plan:
python test/test_quantization.py -k test_qtensor_to_memory_format
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75540
Approved by: https://github.com/jerryzh168