a37caa6e - [Quant][Inductor] Enable quantization linear pattern fusion with int8_mixed_bf16 for gelu (#116004)

**Summary**
Enable the QLinear Unary pattern for gelu with int8_mixed_bf16.

**Test plan**
python test/inductor/test_mkldnn_pattern_matcher.py -k test_qlinear_gelu_int8_mixed_bf16

Co-authored-by: leslie-fang-intel <leslie.fang@intel.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/116004
Approved by: https://github.com/jgong5, https://github.com/leslie-fang-intel
ghstack dependencies: #114853, #114854
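For context, below is a minimal sketch (not the PR's actual test) of the kind of model and lowering flow this fusion targets: a Linear followed by GELU, quantized through the PT2E x86 Inductor quantization flow and compiled under bf16 autocast (the "int8_mixed_bf16" configuration). The entry points shown (`capture_pre_autograd_graph`, `prepare_pt2e`/`convert_pt2e`, `X86InductorQuantizer`) reflect the PT2E flow of that PyTorch era and are an assumption about how a user would exercise the pattern, not code from this commit.

```python
# Sketch of a Linear -> GELU module lowered via PT2E quantization + torch.compile,
# the pattern that this commit teaches Inductor to fuse into a single QLinear op
# when running with int8 weights/activations under bf16 autocast.
import torch
import torch.nn as nn
from torch._export import capture_pre_autograd_graph
from torch.ao.quantization.quantize_pt2e import prepare_pt2e, convert_pt2e
import torch.ao.quantization.quantizer.x86_inductor_quantizer as xiq


class LinearGelu(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(16, 16)
        self.gelu = nn.GELU()

    def forward(self, x):
        return self.gelu(self.linear(x))


if __name__ == "__main__":
    model = LinearGelu().eval()
    example_inputs = (torch.randn(4, 16),)

    # int8_mixed_bf16: int8 quantized linear with surrounding compute in bfloat16.
    with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
        exported = capture_pre_autograd_graph(model, example_inputs)
        quantizer = xiq.X86InductorQuantizer()
        quantizer.set_global(xiq.get_default_x86_inductor_quantization_config())
        prepared = prepare_pt2e(exported, quantizer)
        prepared(*example_inputs)            # calibration pass
        converted = convert_pt2e(prepared)
        compiled = torch.compile(converted)  # Inductor pattern matcher runs here
        out = compiled(*example_inputs)
        print(out.shape, out.dtype)
```

The actual coverage added by the PR lives in the pattern-matcher tests; `test_qlinear_gelu_int8_mixed_bf16` in test/inductor/test_mkldnn_pattern_matcher.py checks that the QLinear + GELU pattern is matched and fused when this flow is used.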