e8be6d81 - [Quant][FX] Fix issue of lowering weighted functional ops with kwargs (#95865)

Fixes #95492

**Summary**
This PR fixes an issue where weighted functional ops with kwargs are not lowered correctly because the kwargs are ignored. These kwargs should be moved from the functional op to its corresponding prepack op, e.g., from `F.conv2d` to `quantized.conv2d_prepack`.

**Test plan**
python test/test_quantization.py -k test_lowering_functional_conv_with_kwargs
python test/test_quantization.py -k test_lowering_functional_conv_transpose_with_kwargs
python test/test_quantization.py -k test_lowering_functional_linear_with_kwargs

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95865
Approved by: https://github.com/jgong5, https://github.com/supriyar
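
For context, below is a minimal sketch of the scenario this fix covers, assuming PyTorch's FX graph mode quantization API (`prepare_fx`/`convert_fx` from `torch.ao.quantization.quantize_fx`). The module `M`, its tensor shapes, and the specific kwarg choices are illustrative, not taken from the PR's tests:

```python
# Sketch of a functional conv whose kwargs must survive lowering.
# Before this fix, the stride/padding kwargs passed to F.conv2d were
# dropped instead of being forwarded to quantized.conv2d_prepack
# during convert_fx.
import torch
import torch.nn.functional as F
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(8, 3, 3, 3))
        self.bias = torch.nn.Parameter(torch.randn(8))

    def forward(self, x):
        # kwargs (stride, padding) should end up on the prepack op
        return F.conv2d(x, self.weight, bias=self.bias, stride=2, padding=1)

example_inputs = (torch.randn(1, 3, 32, 32),)
qconfig_mapping = get_default_qconfig_mapping("fbgemm")

m = prepare_fx(M().eval(), qconfig_mapping, example_inputs)
m(*example_inputs)   # calibration pass
m = convert_fx(m)    # lowering happens here
m(*example_inputs)   # runs the lowered quantized graph
```

With the fix, the converted graph carries the `stride=2, padding=1` arguments through to the prepack call, so the quantized model matches the eager-mode behavior of the original module.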