9ca41a98 - [Quant][FX] Lower QLinearLeakyReLU for onednn backend (#88668)

**Summary**
Add quantization mappings for `QLinearLeakyReLU` for int8 inference with the onednn backend. Fusion and lowering are supported only in FX mode.

**Test plan**
python test_quantization.py TestQuantizeFx

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88668
Approved by: https://github.com/jgong5, https://github.com/jerryzh168
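
For context, here is a minimal sketch (not part of the commit) of the FX-mode flow this change targets: a `Linear` followed by `LeakyReLU`, prepared and converted with the onednn backend's default qconfig mapping so the pattern can be fused and lowered. The module, tensor shapes, and calibration step are illustrative.

```python
import torch
import torch.nn as nn
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx

# Illustrative module containing the Linear -> LeakyReLU pattern
# that the commit's mappings allow to be fused and lowered.
class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(16, 16)
        self.leaky_relu = nn.LeakyReLU(0.01)

    def forward(self, x):
        return self.leaky_relu(self.linear(x))

model = M().eval()
example_inputs = (torch.randn(1, 16),)

# Select onednn as the quantized engine so the lowered ops run
# with onednn kernels (assumes an x86 build with onednn support).
torch.backends.quantized.engine = "onednn"

# "onednn" picks the backend's default qconfigs; the fusion and
# lowering of the pattern happen during convert_fx.
qconfig_mapping = get_default_qconfig_mapping("onednn")
prepared = prepare_fx(model, qconfig_mapping, example_inputs)
prepared(*example_inputs)  # calibration pass (illustrative single batch)
quantized = convert_fx(prepared)
```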