pytorch
f2573944 - [quant] Add QuantizedLSTM class

[quant] Add QuantizedLSTM class

The nn.LSTM is quantized through the custom module mechanism, which uses nn.quantizable.LSTM for both the observed and the quantized paths. This is potentially a source of confusion. This commit adds a `quantized.LSTM` class, which handles the quantized path entirely. Note that after this change, the old usage will throw an error.

New way of using it:

```
>>> custom_module_config = {
...     'float_to_observed_custom_module_class': {
...         nn.LSTM: nn.quantizable.LSTM,
...     },
...     'observed_to_quantized_custom_module_class': {
...         nn.quantizable.LSTM: nn.quantized.LSTM,
...     }
... }
>>> tq.prepare(model, prepare_custom_module_class=custom_module_config)
>>> tq.convert(model, convert_custom_module_class=custom_module_config)
```

Due to weird CI issues with the previous PR, the old discussion can be found at: https://github.com/pytorch/pytorch/pull/71189

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79959
Approved by: https://github.com/z-a-f
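For context, a minimal end-to-end sketch of the eager-mode flow the message describes might look like the following. The `Wrapper` model, tensor shapes, qconfig choice, and the `prepare_custom_config_dict` / `convert_custom_config_dict` keyword names are assumptions for illustration and are not part of the commit; exact names and module locations (e.g. `torch.ao.nn.quantizable` vs. `torch.nn.quantizable`) may differ across PyTorch releases.

```
# Sketch only: kwarg names and module paths are assumptions and may
# vary between PyTorch versions.
import torch
import torch.nn as nn
import torch.ao.quantization as tq


class Wrapper(nn.Module):
    """Toy model wrapping an nn.LSTM between quant/dequant stubs."""

    def __init__(self):
        super().__init__()
        self.quant = tq.QuantStub()
        self.lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)
        self.dequant = tq.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        out, _ = self.lstm(x)
        return self.dequant(out)


model = Wrapper().eval()
model.qconfig = tq.default_qconfig

custom_module_config = {
    'float_to_observed_custom_module_class': {
        nn.LSTM: nn.quantizable.LSTM,
    },
    'observed_to_quantized_custom_module_class': {
        nn.quantizable.LSTM: nn.quantized.LSTM,
    },
}

# Swap nn.LSTM for the observed (quantizable) LSTM and insert observers.
prepared = tq.prepare(model, prepare_custom_config_dict=custom_module_config)

# Calibrate with representative data so the observers record ranges.
with torch.no_grad():
    prepared(torch.randn(2, 4, 8))

# Swap the observed LSTM for the quantized LSTM added by this commit.
quantized = tq.convert(prepared, convert_custom_config_dict=custom_module_config)
print(type(quantized.lstm))
```

Compared with the old usage, only the `observed_to_quantized_custom_module_class` target changes: the observed `nn.quantizable.LSTM` is converted to the new `quantized.LSTM` instead of being reused for both paths.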