pytorch
84506e03 - fx quant: fix fq when input is quantized and node does not need fq (#49382)

fx quant: fix fq when input is quantized and node does not need fq (#49382)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49382

Fixes an edge case: if the input to the graph is quantized and the first node does not need activation observation, make sure that the observer is not inserted.

Test Plan:
```
python test/test_quantization.py TestQuantizeFxOps.test_int8_input_no_unnecessary_fq
```

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D25551041

fbshipit-source-id: a6cba235c63ca7f6856e4128af7c1dc7fa0085ea
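The rule this commit enforces can be sketched as a small standalone Python snippet. This is an illustrative model of the decision logic only, not PyTorch's actual FX quantization internals; the names (`Node`, `needs_observation`, `insert_input_observers`) are hypothetical:

```python
# Hypothetical sketch of the observer-insertion rule described in this commit.
# It models the fixed edge case: when the graph input is already quantized and
# the first node does not need activation observation, no observer (fake-quant)
# should be inserted for that input.
from dataclasses import dataclass
from typing import List


@dataclass
class Node:
    name: str
    needs_observation: bool  # whether this op requires activation observation


def insert_input_observers(first_node: Node, input_quantized: bool,
                           graph: List[str]) -> None:
    """Maybe insert an observer for the graph input feeding first_node."""
    if input_quantized and not first_node.needs_observation:
        # Edge case fixed by the commit: skip the unnecessary observer.
        return
    graph.append(f"observer_for_{first_node.name}")


graph: List[str] = []

# int8 graph input feeding a node that needs no observation: nothing inserted.
insert_input_observers(Node("reshape", needs_observation=False),
                       input_quantized=True, graph=graph)
print(graph)  # []

# fp32 graph input feeding an observed op: observer is still inserted.
insert_input_observers(Node("linear", needs_observation=True),
                       input_quantized=False, graph=graph)
print(graph)  # ['observer_for_linear']
```

Before the fix, the first case would also have received an observer, adding a redundant fake-quant step on an already-quantized input.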