4666e3f1 - [quant] update fused_obs_fake_quant op to accept output_fake_quant argument (#65621)

4 years ago
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65621

Add a new attribute to FusedMovingAvgObsFakeQuantize that controls whether the fake-quant operation is applied at the output of a particular layer. The motivation is to give users additional control over the numerics of the fake_quant operators during training. It defaults to always fake-quantizing the output (True).

Note: we will still observe the tensors as before; only the fake_quant operation is controlled by this flag.

For example:

```
Input model:
x -> fc1 -> fc2 -> non_quantizable_op -> fc3

After fake_quant:
x -> fake_quant(x) -> fc1 -> fake_quant(fc1) -> fc2 -> fake_quant(fc2) -> non_quantizable_op -> fake_quant() -> fc3 -> fake_quant(fc3)

With output_fake_quant disabled at the outputs of fc2 and fc3 (since their outputs are non-quantizable):
x -> fake_quant(x) -> fc1 -> fake_quant(fc1) -> fc2 -> non_quantizable_op -> fake_quant() -> fc3
```

Test Plan:
./buck-out/gen/caffe2/test/quantization_fx#binary.par -r test_disable_output_fake_quant

Reviewed By: jerryzh168

Differential Revision: D31174526

fbshipit-source-id: bffe776216d041fb09133a6fb09bfc2c0bb46b89
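The behavior described above — always update the observer statistics, but gate only the fake-quant transform on a flag — can be sketched in plain PyTorch. This is an illustrative sketch, not the fused op's actual signature: the function name, averaging constant, and quantization range below are assumptions chosen for clarity.

```python
import torch

def fused_obs_fake_quant_sketch(x, running_min, running_max,
                                averaging_const=0.01,
                                quant_min=0, quant_max=255,
                                output_fake_quant=True):
    """Observe the tensor's range, then optionally fake-quantize it.

    Hypothetical sketch of the semantics in the commit message: the
    moving-average observer is ALWAYS updated; `output_fake_quant`
    only controls whether the fake-quant numerics are applied.
    """
    # Moving-average min/max observer update (runs unconditionally).
    cur_min, cur_max = x.min(), x.max()
    running_min += averaging_const * (cur_min - running_min)
    running_max += averaging_const * (cur_max - running_max)

    if not output_fake_quant:
        # Statistics were updated, but the tensor passes through untouched.
        return x

    # Derive affine quantization parameters from the observed range.
    scale = (running_max - running_min).clamp(min=1e-8) / (quant_max - quant_min)
    zero_point = (quant_min - torch.round(running_min / scale)).clamp(quant_min, quant_max)
    return torch.fake_quantize_per_tensor_affine(
        x, scale.item(), int(zero_point.item()), quant_min, quant_max)
```

With `output_fake_quant=False` the output is bit-identical to the input even though the running min/max have moved, which is exactly the property the new flag provides for layers feeding non-quantizable ops.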