pytorch
8460fa57 - [quant][fx] Add an option in convert_fx to accept qconfig_dict to skip quantization (#66878)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66878

Currently, convert_fx quantizes all layers that have been prepared, based on the qconfig_dict passed to prepare_fx. This PR adds support for accepting a variation of qconfig_dict in convert_fx that can be used to skip quantizing certain layers.

This enables preparing/observing all operators once and then quantizing only a subset of them (e.g. based on quantization error), avoiding the need to prepare multiple times. The qconfig_dict passed to convert_fx may only have values set to `None`, with the keys being the same as those allowed in the prepare qconfig_dict.

Test Plan:
python test/test_quantization.py TestQuantizeFx.test_convert_qconfig_dict

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D31808247

fbshipit-source-id: a4f5dca1090f0083fc3fea14aff56924033eb24f
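The workflow described above can be sketched as follows. This is a hedged illustration, not code from the PR: the model and submodule names (`model`, `sub.linear2`) are hypothetical, the surrounding `prepare_fx`/`convert_fx` calls reflect the qconfig_dict-based API from the era of this PR (later PyTorch releases replaced it with `QConfigMapping`), and only the dict construction below is executable as-is.

```python
# Hypothetical end-to-end flow (torch calls shown as comments, since the
# qconfig_dict-based convert_fx signature is specific to this PR's era):
#
#   from torch.quantization import get_default_qconfig
#   from torch.quantization.quantize_fx import prepare_fx, convert_fx
#
#   # 1. Prepare/observe ALL layers once.
#   prepare_qconfig_dict = {"": get_default_qconfig("fbgemm")}
#   prepared = prepare_fx(model, prepare_qconfig_dict)
#   # ... run calibration data through `prepared` ...
#
# 2. In convert_fx, the same kinds of keys are allowed as in the prepare
#    qconfig_dict, but every value must be None, meaning "skip quantizing
#    this layer" (e.g. because it showed high quantization error):
convert_qconfig_dict = {
    "module_name": [
        ("sub.linear2", None),  # hypothetical submodule name; left in fp32
    ],
}
#   quantized = convert_fx(prepared, qconfig_dict=convert_qconfig_dict)
```

Because observation already happened for every layer during prepare, different skip-subsets can be converted from the same prepared model without re-calibrating.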