pytorch
0c0de542 - [quant][graphmode][fx] Guard the supported quantization type for add/mul (#52413)

3 years ago
[quant][graphmode][fx] Guard the supported quantization type for add/mul (#52413)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52413

TODO: We'll need to add this guard for other ops as well

(Note: this ignores all push blocking failures!)

Test Plan:
python test/test_quantization.py TestQuantizeFx.test_mul_add_fp16_config

Imported from OSS

Reviewed By: supriyar

Differential Revision: D26503348

fbshipit-source-id: 5aaba518742a516cc3521fd5f23f1a264d2973e2
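The idea behind this change — only quantize add/mul when the requested dtype configuration is actually supported, and otherwise fall back to the fp32 op — can be sketched as follows. This is an illustrative sketch, not the actual PyTorch FX quantization code: the names `SUPPORTED_BINARY_OP_DTYPES` and `should_quantize_binary_op` are hypothetical, and dtypes are modeled as plain strings for simplicity.

```python
# Hypothetical sketch of a dtype-config guard for binary ops (add/mul).
# The real FX graph mode quantization pass does this inside the graph
# transformation; here we only model the supported-config check itself.

# Dtype pairs the backend can lower for add/mul. An fp16 config is not
# among them, matching the scenario exercised by the commit's test
# (TestQuantizeFx.test_mul_add_fp16_config).
SUPPORTED_BINARY_OP_DTYPES = {
    ("quint8", "quint8"),    # int8 static quantization
    ("float32", "float32"),  # leave the op unquantized in fp32
}


def should_quantize_binary_op(input_dtype: str, output_dtype: str) -> bool:
    """Return True only if the (input, output) dtype pair is supported.

    Guarding here means an unsupported config (e.g. fp16) does not
    produce an invalid quantized graph; the op is simply left in fp32.
    """
    return (input_dtype, output_dtype) in SUPPORTED_BINARY_OP_DTYPES


# fp16 is rejected, so add/mul would be left unquantized:
print(should_quantize_binary_op("float16", "float16"))  # False
print(should_quantize_binary_op("quint8", "quint8"))    # True
```

The same guard pattern generalizes to other ops, which is what the commit's TODO notes: each quantizable op consults a table of supported dtype configurations before the pass rewrites it.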