5576c7bd - ns for fx: initial support for int8 shadows fp32 (#60419)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60419

Adds support for the NS for FX shadowed activations pass to handle int8 modules shadowing fp32 modules. The difficulty here is that in order to insert the dtype cast, we need the qparams of the input. For the current PR, we only handle the easy cases where the previous node is either a `quantize_per_tensor` call or an OSS quantized module. A future PR can handle more complicated cases, such as various functions.

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs.test_int8_shadows_fp32_simple
```

Imported from OSS

Reviewed By: hx89

Differential Revision: D29280050

fbshipit-source-id: 465257c9f82a34fa91b48ae8887355c68e00edc6
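As a rough illustration of the idea described in the summary (not the code from this PR), the sketch below shows how the qparams needed for the dtype cast might be recovered when the previous node is a `quantize_per_tensor` call or an OSS quantized module. The helper name `get_prev_node_qparams` is hypothetical; the FX node and module APIs used are standard `torch.fx` calls.

```python
# Minimal sketch of the qparam-recovery idea; a hypothetical helper, not the PR's implementation.
import torch
import torch.fx


def get_prev_node_qparams(prev_node: torch.fx.Node, gm: torch.fx.GraphModule):
    """Return (scale, zero_point) describing prev_node's int8 output, or None if unknown."""
    if prev_node.op == "call_function" and prev_node.target == torch.quantize_per_tensor:
        # torch.quantize_per_tensor(input, scale, zero_point, dtype): in a traced
        # graph the scale / zero_point args may be constants or get_attr nodes.
        scale, zero_point = prev_node.args[1], prev_node.args[2]
        return scale, zero_point
    if prev_node.op == "call_module":
        mod = dict(gm.named_modules())[prev_node.target]
        # OSS quantized modules (e.g. torch.nn.quantized.Conv2d) expose their
        # output qparams as scale / zero_point attributes.
        if hasattr(mod, "scale") and hasattr(mod, "zero_point"):
            return float(mod.scale), int(mod.zero_point)
    # More complicated cases (arbitrary functions, etc.) are left to a future PR.
    return None
```

A caller inserting the dtype cast would only proceed for nodes where this lookup succeeds, which matches the "easy cases" scoping described in the summary.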