[Quant][test] Added test to check if fp16 packing->unpacking yields the same result as to(torch.float16).to(torch.float32) (#73685)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73685
A test was added in test_quantized_op.py that checks whether fp16 packing followed by unpacking of a given fp32 tensor produces the same result as to(torch.float16).to(torch.float32).
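The round-trip property being tested can be illustrated with a small NumPy sketch (NumPy stands in for torch here purely for illustration; the actual test exercises the quantized fp16 packing ops):

```python
import numpy as np

# fp32 values: some exactly representable in fp16, some not
x = np.array([1.0, 0.1, 65504.0, 1e-8], dtype=np.float32)

# fp16 round-trip cast, the reference the packed/unpacked result
# is compared against in the test
round_trip = x.astype(np.float16).astype(np.float32)

# 1.0 and 65504.0 (fp16 max) survive exactly; 0.1 is rounded to the
# nearest fp16 value; 1e-8 underflows to 0 (below fp16's smallest
# subnormal, ~5.96e-8)
print(round_trip)
```

Packing to fp16 and unpacking back is expected to match this cast bit-for-bit, including the precision loss on values not representable in fp16.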
Test Plan:
in pytorch main directory, execute
```
python test/test_quantization.py TestDynamicQuantizedOps.test_pack_unpack_fp16
```
Differential Revision: D34599476
Reviewed By: jerryzh168
Pulled By: dzdang
fbshipit-source-id: 5da453e5db4801dde196424282140726c8a4ef1f
(cherry picked from commit ac8910e7feb4eebf677c99f287d48915165a87bf)