777c12f2 - [quant] Modify APoT nonuniform quantization workflow (#80075)

[quant] Modify APoT nonuniform quantization workflow (#80075)

### Summary

This PR updates the design of APoT Observer, Quantizer, and Tensor to be more consistent with their uniform counterparts in the PyTorch framework:

- APoT Observer now calculates alpha as the maximum of the absolute values of the input tensor's min and max values.
- APoT Quantizer is modified so that its instance methods `quantize_APoT` and `dequantize_APoT` are called by their global function counterparts.
- APoT Tensor is modified to account for the new definition of `quantize_APoT` in APoT Quantizer.

### Test Plan

Run APoT Observer class unit tests with:
`python pytorch/test/quantization/core/experimental/test_nonuniform_observer.py`

Run APoT Quantizer class unit tests with:
`python pytorch/test/quantization/core/experimental/test_quantizer.py`

Run APoT Tensor class unit tests with:
`python pytorch/test/quantization/core/experimental/test_quantized_tensor.py`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/80075
Approved by: https://github.com/jerryzh168
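The two design points above (symmetric alpha from |min| and |max|, and global functions delegating to quantizer instance methods) can be sketched as follows. This is a simplified plain-Python illustration, not the actual APIs from the PR; the names `compute_alpha`, `APoTQuantizerSketch`, and the clamp-based "quantization" body are hypothetical stand-ins.

```python
# Simplified sketch of the observer/quantizer pattern described in the PR.
# Names and signatures here are illustrative only; they do not reproduce
# the actual code from pytorch/pytorch#80075.

def compute_alpha(values):
    """Observer step: alpha is the max of the absolute values of the
    tensor's min and max, giving a symmetric quantization range."""
    return max(abs(min(values)), abs(max(values)))

class APoTQuantizerSketch:
    """Holds quantization parameters; instance methods do the work."""
    def __init__(self, alpha):
        self.alpha = alpha

    def quantize_APoT(self, values):
        # Placeholder for the real APoT mapping: here we just clamp
        # each value to the symmetric range [-alpha, alpha].
        return [max(-self.alpha, min(self.alpha, v)) for v in values]

# Global function counterpart that delegates to the instance method,
# mirroring the call structure the PR introduces.
def quantize_APoT(values, quantizer):
    return quantizer.quantize_APoT(values)

data = [-3.0, 0.5, 2.0]
alpha = compute_alpha(data)  # max(|-3.0|, |2.0|) = 3.0
quantized = quantize_APoT(data, APoTQuantizerSketch(alpha))
```

Keeping the logic in instance methods while exposing thin global wrappers matches the structure of the uniform quantization utilities elsewhere in PyTorch, which is the consistency goal the summary describes.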