597c9f8b - fix zero_point rounding for _fake_quantize_learnable_per_channel_affine (#52290)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52290

_fake_quantize_learnable_per_channel_affine should accept a non-integer zero_point as input, and should round and clamp it before running the forward/backward pass. This diff makes _fake_quantize_learnable_per_channel_affine round and clamp zero_point beforehand, as _fake_quantize_learnable_per_tensor_affine already does.

ghstack-source-id: 122148099

Test Plan: `buck test mode/dev-nosan -c fbcode.platform=platform009 //caffe2/test:quantization -- test_learnable`

Reviewed By: raghuramank100

Differential Revision: D26446342

fbshipit-source-id: fc9b6832fa247cc9d41265eb4fd1575a2d2ed12c
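Below is a minimal sketch of the preprocessing this change describes: rounding a learnable floating-point zero_point to the nearest integer and clamping it into the quantized range before fake quantization. The helper name `round_and_clamp_zero_point`, the sample values, and the [0, 255] range are illustrative assumptions, not the actual kernel code from this diff.

```python
import torch

def round_and_clamp_zero_point(zero_point, quant_min, quant_max):
    # Hypothetical helper: round the floating-point zero_point to the
    # nearest integer, then clamp it into [quant_min, quant_max] --
    # the preprocessing this commit applies before forward/backward.
    return torch.clamp(torch.round(zero_point), quant_min, quant_max)

# One learnable float zero_point per channel of a weight tensor.
zero_point = torch.tensor([0.4, 1.6, -0.7])
print(round_and_clamp_zero_point(zero_point, quant_min=0, quant_max=255))
# tensor([0., 2., 0.])
```

Because the zero_point parameter is learned with gradient updates, it drifts off integer values during training; rounding and clamping it up front keeps the per-channel op's behavior consistent with the per-tensor variant.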