cec44aa5 - [quant] Add op support for linear_relu_dynamic_fp16 (#63824)

[quant] Add op support for linear_relu_dynamic_fp16 (#63824)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63824

Add a fused operator implementation that will work with the quantization fusion APIs. Once the FBGEMM FP16 kernel supports relu fusion natively, we can remove the addition from the PT operator.

Test Plan:
python test/test_quantization.py

Imported from OSS

Reviewed By: heitorschueroff

Differential Revision: D30503514

fbshipit-source-id: 6bf3bd53f47ffaa3f1d178eaad8cc980a7f5258a
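
For context, a minimal sketch of the eager-mode flow this operator targets: fuse an nn.Linear + nn.ReLU pair, then dynamically quantize with fp16 weights. The model class, layer sizes, and tensor shapes below are illustrative only, and whether the fused pair actually lowers to the dynamic fp16 linear+relu path depends on the PyTorch build including this commit's stack.

    import torch
    import torch.nn as nn
    import torch.nn.intrinsic as nni

    # Hypothetical toy model with a Linear -> ReLU pattern (sizes arbitrary).
    class LinearReLUModel(nn.Module):
        def __init__(self):
            super().__init__()
            self.linear = nn.Linear(16, 8)
            self.relu = nn.ReLU()

        def forward(self, x):
            return self.relu(self.linear(x))

    model = LinearReLUModel().eval()

    # Fuse Linear + ReLU into an intrinsic LinearReLU module (eager-mode
    # fusion API; requires eval mode).
    fused = torch.quantization.fuse_modules(model, [["linear", "relu"]])

    # Dynamic quantization with fp16 weights. On builds that include this
    # stack, the fused module should map to the dynamic fp16 fused module,
    # with relu applied inside the PT operator until FBGEMM supports the
    # fusion natively (per the summary above).
    quantized = torch.quantization.quantize_dynamic(
        fused, {nn.Linear, nni.LinearReLU}, dtype=torch.float16
    )

    out = quantized(torch.randn(4, 16))
    print(out.shape)  # torch.Size([4, 8])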