pytorch
1d7b2945 - [quant][better-engineering][bc-breaking] Removed quant_min/quant_max from fake_quant modules

Summary: The FakeQuantize class has quant_min/quant_max attributes as well as an activation_post_process attribute, and the latter already includes quant_min/quant_max. As such, we can remove quant_min/quant_max from FakeQuantize and use FakeQuantize.activation_post_process.quant_m* directly.

Test plan:
```
python test/test_quantization.py
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76674
Approved by: https://github.com/vkuzo
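A minimal sketch of the access-pattern change this commit describes; the import path (torch.ao.quantization) and the choice of observer are illustrative assumptions, not taken from the commit itself:

```python
import torch
from torch.ao.quantization import FakeQuantize, MovingAverageMinMaxObserver

# FakeQuantize wraps an observer instance as activation_post_process,
# and that observer already carries the quantization bounds.
fq = FakeQuantize(
    observer=MovingAverageMinMaxObserver,
    quant_min=0,
    quant_max=255,
)

# After this change, FakeQuantize no longer stores its own duplicate
# copies of the bounds; they are read from the wrapped observer directly.
print(fq.activation_post_process.quant_min)  # 0
print(fq.activation_post_process.quant_max)  # 255
```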