pytorch
a3b505c5 - [Quant] Fix setting fixed qparams for inner LSTM ops (#95537)

Summary: The existing util function did not quantize all inner ops in the quantizable LSTM module, resulting in the error "Could not run X with arguments from the 'QuantizedCPU' backend." This commit fixes this by ensuring that all other ops whose qparams were not explicitly configured are still quantized as before, as in `torch.ao.nn.quantizable.LSTM.from_float`.

Test Plan: This commit also adds a check in the test to ensure that the final converted model is in fact quantized, in addition to checking that the qparams in the observers have the right values.

python test/test_quantization.py TestQuantizeFx.test_static_lstm_with_custom_fixed_qparams

Reviewers: vkuzo

Subscribers: vkuzo, supriyar

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95537
Approved by: https://github.com/vkuzo
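For context, a minimal sketch of the two pieces the summary mentions: a fixed-qparams qconfig built from `FixedQParamsObserver`, and the `torch.ao.nn.quantizable.LSTM.from_float` path that places observers on every inner op so all of them end up quantized after convert. The scale/zero_point values, the small LSTM dimensions, and the calibration input are illustrative assumptions; the util function from this PR that attaches fixed qparams to specific inner ops is not reproduced here.

```python
import torch
from torch.ao.nn.quantizable import LSTM as QuantizableLSTM
from torch.ao.quantization import (
    FixedQParamsObserver,
    QConfig,
    convert,
    default_qconfig,
    default_weight_observer,
)

# A fixed-qparams qconfig such as one might use for the sigmoid outputs inside
# the LSTM cell. The (scale, zero_point) values are illustrative assumptions,
# not the values the commit's util function installs; wiring this qconfig onto
# specific inner ops is done by that util function and is not shown here.
sigmoid_qconfig = QConfig(
    activation=FixedQParamsObserver.with_args(
        scale=1.0 / 256.0, zero_point=0, dtype=torch.quint8
    ),
    weight=default_weight_observer,
)

# The baseline path the summary refers to: from_float builds the observed
# quantizable LSTM with observers on all inner ops, so every one of them is
# quantized after convert.
float_lstm = torch.nn.LSTM(input_size=8, hidden_size=8, num_layers=1)
float_lstm.qconfig = default_qconfig
observed_lstm = QuantizableLSTM.from_float(float_lstm)

# Calibrate with a float input of shape (seq_len, batch, input_size), then convert.
observed_lstm(torch.randn(5, 3, 8))
quantized_lstm = convert(observed_lstm)
```

The "Could not run X with arguments from the 'QuantizedCPU' backend" error typically arises when a float-only op is handed quantized tensors, which is what happens if some inner LSTM ops are left unquantized while the ops feeding them are converted; quantizing all inner ops, as in the `from_float` path above, avoids that mismatch.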