71e1992b - quantization: remove most fp16 configs from fbgemm/qnnpack

Summary:

The fbgemm and qnnpack backends mostly support ops with quint8 activations. Historically, the default backend config has also included ops with fp16 activations for other backends. This PR keeps the old config available under a different name so its functionality stays tested, and makes the default config match the ops that fbgemm/qnnpack actually support.

Test plan:
```
python test/test_quantization.py -k TestQuantizeFx
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78528
Approved by: https://github.com/andrewor14
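
As a rough illustration (not part of the commit), the sketch below inspects which input dtypes the default ("native") backend config advertises per op pattern; after this change those should be predominantly quint8. It assumes the `BackendConfig` API in `torch.ao.quantization.backend_config` (present in recent PyTorch releases; older releases exposed a plain dict instead):

```python
# Sketch only: list the input dtypes each op pattern supports in the
# default backend config. Assumes BackendConfig.configs and the
# DTypeConfig.input_dtype accessor; exact APIs vary across versions.
import torch
from torch.ao.quantization.backend_config import get_native_backend_config

backend_config = get_native_backend_config()
for pattern_config in backend_config.configs:
    # Each BackendPatternConfig pairs an op pattern with the dtype
    # combinations the backend supports for it.
    dtypes = {d.input_dtype for d in pattern_config.dtype_configs}
    print(pattern_config.pattern, dtypes)  # mostly torch.quint8 after this change
```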