54241a9c - [quant][fx] Add support for fused modules in _convert_do_not_use (#67245)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67245

Add support for fused modules in the new convert path, including linear-relu, conv{1-3}d-relu, and their QAT versions; also tested with TensorRT (conv2d-relu and linear-relu).

Test Plan:
```
python test/fx2trt/test_quantize_fx.py TestQuantizeFxTRTOps.test_linear_relu_module
python test/fx2trt/test_quantize_fx.py TestQuantizeFxTRTOps.test_conv_relu_module
```

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D31919724

fbshipit-source-id: 7e5c96eba30706f7989da680aa3443159847bdfd
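For context, the "fused modules" this commit handles are PyTorch intrinsic modules such as `LinearReLU`, produced by fusing adjacent ops before quantization. A minimal sketch of how such a fused module is created with the public `fuse_modules` API (this illustrates the concept only; it is not the `_convert_do_not_use` code path changed in this commit):

```python
import torch
import torch.nn as nn
# In older releases this API lives at torch.quantization.fuse_modules.
from torch.ao.quantization import fuse_modules


class SmallModel(nn.Module):
    """A module with an adjacent linear + relu pair, eligible for fusion."""

    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 4)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.linear(x))


model = SmallModel().eval()  # fusion for inference requires eval mode

# Fuse the named linear/relu pair into a single intrinsic LinearReLU module;
# the original relu slot is replaced with an Identity.
fused = fuse_modules(model, [["linear", "relu"]])

print(type(fused.linear).__name__)  # expected: LinearReLU
```

After fusion, quantization workflows (including the convert path this commit extends) can map the single `LinearReLU` module to one quantized kernel instead of two separate ops.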