ef4118e4 - [Quant][FX] Lower QConvAdd2d for onednn backend (#91153)

**Summary**
Add quantization mappings for QConvAdd2d for int8 inference with the onednn backend. The fusion and lowering are supported only in FX mode.

**Test plan**
```
python -m pytest test_quantization.py -k test_fuse_conv_bn_add_relu_onednn
python -m pytest test_quantization.py -k test_fuse_conv_bn_add_relu_by_default
python -m pytest test_quantization.py -k test_fuse_conv_bn_add_relu_lowering
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91153
Approved by: https://github.com/jgong5, https://github.com/jerryzh168
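The commit targets the conv-then-add pattern in FX graph mode quantization. A minimal sketch of how such a pattern would be quantized with the onednn qconfig mapping is below; the `ConvAdd` module is a hypothetical example, not taken from the PR, and uses the public `prepare_fx`/`convert_fx` workflow rather than the PR's internal lowering code:

```python
import torch
import torch.nn as nn
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx

# Hypothetical module exhibiting the conv + add pattern that this commit
# lowers to QConvAdd2d for the onednn backend.
class ConvAdd(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 3, kernel_size=3, padding=1)

    def forward(self, x, y):
        # conv output added to a second tensor: the candidate fusion pattern
        return self.conv(x) + y

model = ConvAdd().eval()
example_inputs = (torch.randn(1, 3, 8, 8), torch.randn(1, 3, 8, 8))

# onednn-specific qconfig mapping; fusion/lowering happens only in FX mode
qconfig_mapping = get_default_qconfig_mapping("onednn")
prepared = prepare_fx(model, qconfig_mapping, example_inputs)
prepared(*example_inputs)  # calibration pass to collect observer statistics
quantized = convert_fx(prepared)

# Executing the int8 model requires the onednn quantized engine,
# which is only present in builds that support it
if "onednn" in torch.backends.quantized.supported_engines:
    torch.backends.quantized.engine = "onednn"
    out = quantized(*example_inputs)
```

After `convert_fx`, inspecting `quantized.graph` should show the conv and add nodes replaced by a fused quantized op when the backend's pattern matching applies.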