pytorch
89d57f26 - [quant][pt2] Fix convert in Conv + BN + ReLU QAT fusion (#102993)

[quant][pt2] Fix convert in Conv + BN + ReLU QAT fusion (#102993)

Summary: Previously, the QAT pattern for conv + bn + relu was not actually replaced during convert. This is because the quantized QAT pattern used in convert does not have a relu node, so it never matches graphs that contain one. This commit adds the extra relu-containing pattern to the convert path, and the numerics now match FX's.

Test Plan: python test/test_quantization.py TestQuantizePT2E.test_qat_conv_bn_relu_numerics

Reviewed By: jerryzh168

Differential Revision: D46372411

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102993

Approved by: https://github.com/jerryzh168
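To make the matching failure concrete, here is a minimal, self-contained sketch using the public torch.fx subgraph rewriter rather than the actual PT2 quantization internals. The pattern functions, the fq helper, the replacement, and the ConvBnRelu module are all illustrative stand-ins, not code from this commit: the point is only that a pattern ending in conv -> bn -> fake-quantize cannot match a graph where a relu sits between bn and the fake-quantize, so a separate relu-containing pattern must be registered.

```python
import torch
import torch.nn.functional as F
from torch.fx import symbolic_trace, subgraph_rewriter

def fq(x):
    # Stand-in for the output fake-quantize node the QAT pattern ends in.
    return torch.fake_quantize_per_tensor_affine(x, 1.0, 0, -128, 127)

def qat_conv_bn_pattern(x, conv_w, bn_w, bn_b, bn_rm, bn_rv):
    # Relu-less pattern: conv -> bn -> fake-quantize.
    x = F.conv2d(x, conv_w)
    x = F.batch_norm(x, bn_rm, bn_rv, bn_w, bn_b, True)
    return fq(x)

def qat_conv_bn_relu_pattern(x, conv_w, bn_w, bn_b, bn_rm, bn_rv):
    # The relu variant: conv -> bn -> relu -> fake-quantize.
    x = F.conv2d(x, conv_w)
    x = F.batch_norm(x, bn_rm, bn_rv, bn_w, bn_b, True)
    return fq(F.relu(x))

def quantized_replacement(x, conv_w, bn_w, bn_b, bn_rm, bn_rv):
    # Illustrative replacement: bn folded away, relu kept.
    return fq(F.relu(F.conv2d(x, conv_w)))

class ConvBnRelu(torch.nn.Module):
    # Written in functional form so the traced graph contains the same
    # call_function nodes (F.conv2d, F.batch_norm) as the patterns above.
    def __init__(self):
        super().__init__()
        self.conv_w = torch.nn.Parameter(torch.randn(4, 4, 1, 1))
        self.bn_w = torch.nn.Parameter(torch.ones(4))
        self.bn_b = torch.nn.Parameter(torch.zeros(4))
        self.register_buffer("bn_rm", torch.zeros(4))
        self.register_buffer("bn_rv", torch.ones(4))

    def forward(self, x):
        x = F.conv2d(x, self.conv_w)
        x = F.batch_norm(x, self.bn_rm, self.bn_rv, self.bn_w, self.bn_b, True)
        return fq(F.relu(x))

gm = symbolic_trace(ConvBnRelu())

# The relu-less pattern finds no match: in this graph the bn output feeds
# relu, not the fake-quantize node the pattern expects next, so nothing
# is rewritten (the silent failure this commit fixes).
assert not subgraph_rewriter.replace_pattern(
    gm, qat_conv_bn_pattern, quantized_replacement)

# The relu-containing pattern matches and rewrites the graph.
assert subgraph_rewriter.replace_pattern(
    gm, qat_conv_bn_relu_pattern, quantized_replacement)
```

The fix in the commit follows the same idea: register an additional convert-path pattern whose output path goes through relu, so conv + bn + relu graphs are rewritten instead of being silently skipped.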