pytorch
08d8f817 - [quant][fix][fx][graphmode] Fix qconfig setting for fused modules (#71254)

Commit · 3 years ago
[quant][fix][fx][graphmode] Fix qconfig setting for fused modules (#71254)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71254

When we configure linear and relu with the same qconfig, we have utility functions that also generate a qconfig for the fused linear-relu module, but this code was not called in the correct order, which resulted in unexpected behavior. This PR fixes the ordering. Please see the test case for more details. (Test case is from Supriya.)

Test Plan:
python test/test_quantization.py TestQuantizeFx.test_fused_module_qat_swap

Imported from OSS

Reviewed By: supriyar

Differential Revision: D33558321

fbshipit-source-id: d95114dc4b77264e603c262c2da02a3de4acba69
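The class of bug described above can be sketched in isolation. This is a minimal illustrative model, not PyTorch's actual implementation: the function name `generate_fused_qconfigs`, the string qconfig tags, and the `fusion_pairs` mapping are all hypothetical stand-ins. The point it shows is the ordering requirement: a qconfig for the fused type must be derived from the constituent qconfigs before qconfig lookup happens for fused modules, otherwise the fused module silently ends up with no qconfig.

```python
def generate_fused_qconfigs(qconfig_dict, fusion_pairs):
    """Derive qconfigs for fused types from their constituents.

    qconfig_dict: maps module type name -> qconfig (here just a string tag)
    fusion_pairs: maps (type_a, type_b) -> fused type name

    Hypothetical helper mirroring the utility the commit message refers to.
    """
    out = dict(qconfig_dict)
    for (a, b), fused in fusion_pairs.items():
        qa, qb = qconfig_dict.get(a), qconfig_dict.get(b)
        # Only propagate when both constituents are configured with the
        # same qconfig, matching the condition in the commit message.
        if qa is not None and qa == qb:
            out.setdefault(fused, qa)
    return out


qconfig_dict = {"Linear": "default_qat_qconfig", "ReLU": "default_qat_qconfig"}
fusion_pairs = {("Linear", "ReLU"): "LinearReLU"}

# Correct order: derive fused qconfigs first, then look up the qconfig
# for the fused module. Looking up before deriving (the bug) would miss it.
resolved = generate_fused_qconfigs(qconfig_dict, fusion_pairs)
print(resolved["LinearReLU"])  # -> default_qat_qconfig
```

In the real code path the fix is the same idea: run the fused-qconfig generation utility before the step that reads qconfigs for fused modules during the QAT module swap.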