Fixing a bug in .to for qtensors so scale/zp move too (#61576)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61576
This also fixes an issue in the
empty_quantized_per_channel_affine function where specifying a device
different from the device of scale/zp produced a qtensor whose
scale/zp lived on a different device than its data
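
A minimal sketch of the fixed behavior. This is an illustrative example, not the PR's actual test; it assumes a PyTorch build with quantization support and only exercises the CPU path:

```python
import torch

# Build a per-channel quantized tensor: scale/zero_point are themselves
# tensors (one entry per channel along `axis`).
x = torch.randn(2, 3)
scales = torch.tensor([0.1, 0.2, 0.3])
zero_points = torch.tensor([0, 0, 0])
qx = torch.quantize_per_channel(x, scales, zero_points, axis=1,
                                dtype=torch.qint8)

# With this fix, .to(device) moves scale/zp along with the data, so the
# quantization parameters end up on the same device as the qtensor.
qx_moved = qx.to("cpu")
assert qx_moved.q_per_channel_scales().device == qx_moved.device
assert qx_moved.q_per_channel_zero_points().device == qx_moved.device
```

On a CUDA build, replacing `"cpu"` with `"cuda"` exercises the cross-device path that previously left scale/zp behind.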
Test Plan:
python test/test_quantization.py
TestQuantizedTensor.test_per_channel_to_device
Imported from OSS
Reviewed By: jerryzh168
Differential Revision: D29675461
fbshipit-source-id: 0e2ff20f0f581dae94ee01d3ceead2a620cd26b9