6686e9bc - [Quant] Add fused LinearTanh module for onednn backend (#88923)

[Quant] Add fused LinearTanh module for onednn backend (#88923)

**Summary**
This PR adds a fused `QLinearTanh` module for the onednn backend, to be used for int8 inference with that backend. Calling this module with any other quantization backend raises an error.

**Test plan**
python test_quantization.py TestStaticQuantizedModule

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88923
Approved by: https://github.com/jgong5, https://github.com/jerryzh168
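To make the fusion concrete, here is a minimal pure-Python sketch of the floating-point semantics a fused Linear+Tanh op computes: `tanh(x @ W^T + b)` evaluated as one step, so the intermediate linear output never needs a separate requantization round trip. The function name `linear_tanh` is illustrative only and is not the PyTorch API added by this PR.

```python
import math

def linear_tanh(x, weight, bias):
    """Reference (float) semantics of a fused LinearTanh: tanh(x @ W^T + b).

    x:      list of input rows, each of length in_features
    weight: list of rows, one per output feature (W, shape out x in)
    bias:   list of length out_features
    """
    out = []
    for row in x:
        out_row = []
        for w_row, b in zip(weight, bias):
            # Linear part: dot product plus bias...
            acc = sum(xi * wi for xi, wi in zip(row, w_row)) + b
            # ...immediately followed by tanh, with no intermediate tensor.
            out_row.append(math.tanh(acc))
        out.append(out_row)
    return out

# One output feature: tanh(1*0.5 + 2*(-0.25) + 0) = tanh(0) = 0.0
print(linear_tanh([[1.0, 2.0]], [[0.5, -0.25]], [0.0]))
```

In the quantized onednn path, the benefit of fusing is that the int8 linear output can feed tanh directly inside the backend kernel instead of being dequantized, activated, and requantized as two separate quantized ops.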