onnxruntime
742d4135 - Fix bug related to export failure for DynamicQuantizeLSTM [issue 15465] (#20160)

### Description

See issue 15465: https://github.com/microsoft/onnxruntime/issues/15465

This PR applies the workaround suggested in that thread, which I and numerous other commenters have validated. It allows a PyTorch model with LSTM layers that are dynamically quantized by ONNX to be exported successfully.

### Motivation and Context

Without this change, it is not possible to export the dynamically quantized LSTM model I have trained for use in ONNX Runtime. The workaround currently lives as a local patch in my Python package directory, which makes it effectively impossible for anyone else at my workplace to export the same quantized model.

Co-authored-by: Dhruv Matani <dhruv.matani@grammarly.com>
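For context, a minimal sketch of the export-then-quantize flow the description refers to, assuming PyTorch's `torch.onnx.export` and ONNX Runtime's `onnxruntime.quantization.quantize_dynamic`; the model architecture, shapes, and file names are illustrative and not taken from the issue or this PR:

```python
import torch
import torch.nn as nn
from onnxruntime.quantization import QuantType, quantize_dynamic


class LSTMModel(nn.Module):
    """Toy model with a single LSTM layer (illustrative only)."""

    def __init__(self, input_size: int = 32, hidden_size: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)

    def forward(self, x):
        out, _ = self.lstm(x)
        return out


# 1. Export the float PyTorch model to ONNX.
model = LSTMModel().eval()
dummy_input = torch.randn(1, 10, 32)
torch.onnx.export(
    model,
    dummy_input,
    "lstm_fp32.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=17,
)

# 2. Dynamically quantize the exported ONNX model; the LSTM is rewritten
#    as a DynamicQuantizeLSTM node, the path this fix is concerned with.
quantize_dynamic(
    "lstm_fp32.onnx",
    "lstm_int8.onnx",
    weight_type=QuantType.QInt8,
)
```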