pytorch
f86ec081 - [pytorch][quantization] adding jit state for QuantizedLeakyReLU (#47660)

Commit · 4 years ago
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/47660

Currently, `QuantizedLeakyReLU` doesn't have any items in its `state_dict`. However, this operator needs to store `scale` and `zero_point` in its state dictionary; otherwise, loading the state dict for a quantized model with LeakyReLUs that have non-default quantization params would break.

Test Plan:
Originally the issue was found here: https://www.internalfb.com/intern/anp/view/?id=390362&revision_id=2510709822565735
In the latest version, I fixed this issue: https://www.internalfb.com/intern/anp/view/?id=390362

Reviewed By: jerryzh168
Differential Revision: D24757522
fbshipit-source-id: 57e1dea072b5862e65e228e52a86f2062073aead
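The failure mode described above can be illustrated with a minimal sketch. This is a hypothetical stand-in class, not the actual PyTorch implementation: it mimics the pattern where a quantized module must serialize its quantization params (`scale`, `zero_point`) into its state dict and restore them on load, or a model saved with non-default params would silently come back with the defaults.

```python
# Hypothetical sketch of the state-dict pattern, NOT the real
# torch.nn.quantized.LeakyReLU code. It shows why scale/zero_point
# must round-trip through the state dict.

class QuantizedLeakyReLUSketch:
    def __init__(self, scale=1.0, zero_point=0):
        # Quantization parameters that define the output quantization.
        self.scale = scale
        self.zero_point = zero_point

    def state_dict(self):
        # Before the fix, the equivalent dict was empty, so these
        # values were dropped on save.
        return {"scale": self.scale, "zero_point": self.zero_point}

    def load_state_dict(self, state):
        # Restore the saved quantization params on load.
        self.scale = state["scale"]
        self.zero_point = state["zero_point"]


# Round trip: save a module with non-default params, load into a
# freshly constructed (default-param) module.
saved = QuantizedLeakyReLUSketch(scale=0.25, zero_point=3).state_dict()
restored = QuantizedLeakyReLUSketch()
restored.load_state_dict(saved)
```

With an empty state dict, `restored` would keep `scale=1.0, zero_point=0` and produce incorrectly quantized outputs; with the params serialized, the round trip preserves them.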
Author: Ayush Saraf