92df8706 - fx quant: move {input|output}_quantized_idxs cfg from convert to prepare (#49238)

Committed 4 years ago
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/49238

Moves the `input_quantized_idxs` and `output_quantized_idxs` options from the convert config to the prepare config. This is done because these options control where observers are placed, which changes numerics during QAT. The next PR will adjust the behavior of `input_quantized_idxs` in prepare during QAT to prevent placing a fake_quant at the input if the input is marked quantized. Placing a fake_quant there can lead to numerical inaccuracies during calibration, as it would start with scale=1 and zp=0, which may differ from the quantization parameters of the incoming quantized input.

Test Plan:
```
python test/test_quantization.py TestQuantizeFx
```

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D25498762

fbshipit-source-id: 17ace8f803542155652b310e5539e1882ebaadc6
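The key move described above can be sketched as a change in which custom-config dict carries the indices. This is a minimal illustration built only from the option names in the commit message; the dict names (`prepare_custom_config_dict`, `convert_custom_config_dict`) follow the FX quantization conventions of this PyTorch era and the actual `prepare_fx`/`convert_fx` call sites are not shown.

```python
# Before this commit (assumed layout): the indices lived in the
# convert-time config, even though they affect observer placement.
convert_custom_config_dict_before = {
    "input_quantized_idxs": [0],   # input 0 arrives already quantized
    "output_quantized_idxs": [0],  # output 0 should stay quantized
}

# After this commit: the indices move to the prepare-time config,
# since observer/fake_quant placement happens during prepare and
# is numerics-changing during QAT.
prepare_custom_config_dict_after = {
    "input_quantized_idxs": [0],
    "output_quantized_idxs": [0],
}
convert_custom_config_dict_after = {}  # no longer carries these keys
```

The rationale is that convert only lowers an already-observed model, so a setting that decides where observers go belongs to the prepare step.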