pytorch
1ef77f90 - [quant][graphmode] Different rule for handling `aten::cat` (#38570)

4 years ago
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38570

We changed the rule for quantizing `aten::cat`. Previously, `aten::cat` was treated as an op that should always be quantized, like `aten::conv2d`. This is not ideal; a better approach is to quantize the output of `aten::cat` based on whether its input is quantized: if the input is quantized, we quantize the output; if it is not, we leave the output unquantized. This works because `aten::cat` operates on both quantized and non-quantized tensors.

Test Plan: Imported from OSS

Differential Revision: D21600160

fbshipit-source-id: efa957e0eaa608fffefcdfefa7f442fab45605eb
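A minimal sketch of the behavior this rule relies on: `torch.cat` accepts both regular and quantized tensors, so the quantization state of its output can simply follow its inputs. This uses the eager-mode `torch.quantize_per_tensor` API for illustration; the PR itself changes the graph-mode quantization pass, not this API.

```python
import torch

x = torch.randn(2, 3)
y = torch.randn(2, 3)

# Non-quantized inputs: the concatenated output stays non-quantized.
out_fp = torch.cat([x, y], dim=0)
assert not out_fp.is_quantized

# Quantized inputs: torch.cat also works, and the output is quantized.
qx = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.quint8)
qy = torch.quantize_per_tensor(y, scale=0.1, zero_point=0, dtype=torch.quint8)
out_q = torch.cat([qx, qy], dim=0)
assert out_q.is_quantized
```

Since `aten::cat` handles either case, always inserting quantization around it (as was done for `aten::conv2d`) is unnecessary; propagating the inputs' quantization state is sufficient.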