pytorch
875ba3dd - [quant][trt] Add support for torch.addmm in TensorRT (#67537)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67537

This PR adds support for quantizing torch.addmm to produce a reference quantized pattern. It also extends the backend_config_dict API so that users can specify which argument index corresponds to the input, weight, and bias of an op:

```
addmm_config = {
    "pattern": torch.addmm,
    "observation_type": ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,
    "dtype_configs": [
        weighted_op_qint8_dtype_config,
    ],
    # a map from input type to input index
    "input_type_to_index": {
        "bias": 0,
        "input": 1,
        "weight": 2,
    }
}
```

This requires some changes to how weight_dtype and bias_dtype are resolved in the type inference stage of prepare; those changes are added in the previous PR in the stack.

Test Plan:
```
python test/fx2trt/test_quant_trt.py TestQuantizeFxTRT.test_addmm
```

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D32014998

fbshipit-source-id: 8d96c1e8b7ebb2ab385c08a5b1e43f2d5a2cbcbe
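To see why the `input_type_to_index` map puts bias at 0, input at 1, and weight at 2: `torch.addmm(bias, input, weight)` computes `bias + input @ weight`, so the bias is the first positional argument. The sketch below is illustrative only (not code from the PR) and uses a pure-Python stand-in for `torch.addmm` on 2-D lists so it runs without PyTorch:

```python
# Illustrative sketch: mimics torch.addmm(bias, input, weight) = bias + input @ weight.
# The argument order is what "input_type_to_index" encodes:
#   "bias" -> 0, "input" -> 1, "weight" -> 2.

def addmm(bias, input, weight):
    """Pure-Python stand-in for torch.addmm on 2-D lists."""
    rows, inner, cols = len(input), len(weight), len(weight[0])
    # start from the bias (argument index 0), then accumulate input @ weight
    out = [[bias[i][j] for j in range(cols)] for i in range(rows)]
    for i in range(rows):
        for k in range(inner):
            for j in range(cols):
                out[i][j] += input[i][k] * weight[k][j]
    return out

bias = [[1, 1], [1, 1]]   # argument index 0
x = [[1, 0], [0, 1]]      # "input", argument index 1
w = [[2, 3], [4, 5]]      # "weight", argument index 2
print(addmm(bias, x, w))  # -> [[3, 4], [5, 6]]
```

Because the bias comes first rather than last (as in a typical linear layer), a quantization workflow needs an explicit index map like the one in the config above to find the weight and bias tensors for observation.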