pytorch @ 29881c7f
Fix LSTM int8 quantization model size issue (#23577)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23577

This diff fixes a model size issue introduced in #23291. After that PR, the model size after int8 quantization was the same as that of the original unquantized model. The reason is that the original weight was saved for int8 quantization even though it is no longer needed there. This diff fixes that by saving the original weight only on the fp16 quantization path.

Reviewed By: llyfacebook

Differential Revision: D16557619

fbshipit-source-id: f924ae8d155a0d525b86a7440b3c7147d5bead0a
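The gist of the change can be sketched with a toy module: retain the fp32 original weight only when quantizing to fp16 (where it is still needed later), and let the int8 path serialize just the packed weight. This is a minimal illustration under assumed names (`QuantizedCellParamsSketch`, `packed_weight`, `original_weight`), not the actual quantized-LSTM internals touched by the diff:

```python
import torch

class QuantizedCellParamsSketch(torch.nn.Module):
    """Toy stand-in for a quantized-LSTM parameter holder.

    Class and buffer names are illustrative assumptions, not
    PyTorch's real internals.
    """

    def __init__(self, weight_fp32: torch.Tensor, dtype: torch.dtype):
        super().__init__()
        if dtype == torch.float16:
            # fp16 path: the original fp32 weight is still needed later,
            # so it is kept alongside the half-precision copy.
            self.register_buffer("packed_weight", weight_fp32.half())
            self.register_buffer("original_weight", weight_fp32.clone())
        else:
            # int8 path (the fix): serialize only the quantized weight
            # and its scale. Dropping the fp32 copy here is what shrinks
            # the saved model back to roughly a quarter of the fp32 size;
            # before the fix, original_weight was saved on this path too.
            scale = max(float(weight_fp32.abs().max()) / 127.0, 1e-8)
            q = (weight_fp32 / scale).round().clamp(-128, 127).to(torch.int8)
            self.register_buffer("packed_weight", q)
            self.register_buffer("scale", torch.tensor(scale))

# Comparing serialized sizes shows the effect of the fix:
w = torch.randn(1024, 1024)
for dtype in (torch.int8, torch.float16):
    params = QuantizedCellParamsSketch(w, dtype)
    nbytes = sum(t.numel() * t.element_size()
                 for t in params.state_dict().values())
    print(dtype, nbytes)  # int8 state dict is ~1 MB; fp16 is ~6 MB
```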