pytorch
3157371b - [quant][embedding qat] Fix bug enforcing quant_min <= zero_point <= quant_max for float zeropoint (#68852)

Commit · 3 years ago
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/68852

When using a float zero_point in FakeQuant, such as for embeddings, it does not need to lie between quant_min and quant_max, as is enforced for integer zero_points. This is because float zero_points are defined as zero_point = -min * inv_scale, so the two formulations below coincide:

```
Xq = Round(Xf * inv_scale + zero_point)
Xq = Round((Xf - min) * inv_scale)
```

Test Plan:
pytest test/test_quantization.py -v -k "test_fake_quant_per_channel_qparam_range"

Imported from OSS

Reviewed By: supriyar

Differential Revision: D32645014

fbshipit-source-id: 96dc3ca6eef9cee60be6919fceef95c9f2759891
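To see why a float zero_point can legitimately fall outside [quant_min, quant_max], here is a minimal illustrative sketch (plain Python, not the actual PyTorch FakeQuant implementation; the value range and helper names are made up for the example):

```python
# Illustrative sketch, not PyTorch code. Shows that the float
# zero_point = -min * inv_scale may lie far outside [quant_min, quant_max]
# even though every quantized value still lands inside that range, and that
# the two formulations from the commit message agree:
#   Xq = Round(Xf * inv_scale + zero_point)
#   Xq = Round((Xf - min) * inv_scale)

def quantize_with_float_zp(xf: float, scale: float, xmin: float) -> int:
    """Xq = Round(Xf * inv_scale + zero_point), with a float zero_point."""
    inv_scale = 1.0 / scale
    zero_point = -xmin * inv_scale  # float; not clamped to [quant_min, quant_max]
    return round(xf * inv_scale + zero_point)

def quantize_shifted(xf: float, scale: float, xmin: float) -> int:
    """Equivalent form: Xq = Round((Xf - min) * inv_scale)."""
    return round((xf - xmin) / scale)

# Hypothetical embedding-weight range whose values are all positive:
# min = 5.0, max = 10.0, quantized to the 8-bit range [0, 255].
xmin, xmax = 5.0, 10.0
quant_min, quant_max = 0, 255
scale = (xmax - xmin) / (quant_max - quant_min)  # 5 / 255

# The float zero_point is about -255 here, far below quant_min = 0,
# yet all quantized values still land in [0, 255].
zero_point = -xmin / scale
assert zero_point < quant_min

for xf in (5.0, 6.0, 10.0):
    q = quantize_with_float_zp(xf, scale, xmin)
    assert q == quantize_shifted(xf, scale, xmin)
    assert quant_min <= q <= quant_max
```

An integer-zero_point scheme would have to clamp zero_point into [quant_min, quant_max]; with the float formulation the range check the bug enforced is simply not applicable.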