pytorch
43335cdd - Fold quantize op into module (#25625)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/25625

We want to fold the quantize op for weights/bias into the module, to avoid quantizing the weights on the fly at every forward call.

Test Plan: python test/test_jit.py

Imported from OSS

Differential Revision: D17208889

fbshipit-source-id: 1854b8953b065855d210bc1166533c08ca264354
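The idea behind the fold can be sketched in plain Python. This is not the actual JIT graph pass from the PR; it is a hypothetical illustration (all class and function names invented here) of the difference between quantizing a weight on the fly inside `forward` and folding the already-quantized weight into the module once at construction:

```python
# Hedged sketch, not PyTorch's implementation: shows why folding the
# quantize op for weights into the module avoids repeated work.

def quantize(w, scale, zero_point):
    """Affine-quantize a list of floats to ints: q = round(x/scale) + zp."""
    return [round(x / scale) + zero_point for x in w]

def dequantize(q, scale, zero_point):
    """Invert the affine quantization back to floats."""
    return [(x - zero_point) * scale for x in q]

class OnTheFlyLinear:
    """Quantizes its weight on every forward call (what the pass avoids)."""
    def __init__(self, weight, scale, zero_point):
        self.weight, self.scale, self.zp = weight, scale, zero_point
        self.quantize_calls = 0  # counts the redundant per-call work

    def forward(self, x):
        self.quantize_calls += 1
        qw = quantize(self.weight, self.scale, self.zp)  # paid every call
        w = dequantize(qw, self.scale, self.zp)
        return sum(wi * xi for wi, xi in zip(w, x))

class FoldedLinear:
    """Weight is quantized once at construction, i.e. folded into the module."""
    def __init__(self, weight, scale, zero_point):
        self.qweight = quantize(weight, scale, zero_point)  # done once
        self.scale, self.zp = scale, zero_point

    def forward(self, x):
        w = dequantize(self.qweight, self.scale, self.zp)
        return sum(wi * xi for wi, xi in zip(w, x))
```

Both variants compute the same result; the folded module simply moves the quantize op out of the hot path, which is the behavior the real pass achieves on the JIT graph.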