b82df92c - [quant] Fix qmin/qmax when using customized qrange (#74717)

[quant] Fix qmin/qmax when using customized qrange (#74717)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74717

Currently min_float maps to 0 and max_float maps to 65535 due to an incorrect qmin/qmax in the qint16 customized qrange. The expectation from the configured observers is that the integer representation is a signed int16, i.e. -32768 to 32767.

Test Plan: Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D35129924

fbshipit-source-id: 924902dd7e64c1218971422ba2451c2a484fd2f4
(cherry picked from commit 95659cdeeec7b3a01a64355244847e211c6dd2a6)
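The bug described above can be illustrated with a small sketch. This is not PyTorch's actual observer code; the helper names `qrange` and `affine_params` are hypothetical, written only to show why a signed int16 range (-32768 to 32767) and an unsigned one (0 to 65535) produce different quantization parameters:

```python
# Hedged sketch, not the PyTorch implementation: derive qmin/qmax for a
# 16-bit quantized type and the resulting affine quantization parameters.

def qrange(bits: int, signed: bool):
    """Return (qmin, qmax) for an integer type of the given bit width.

    Both helper names here are hypothetical, for illustration only.
    """
    if signed:
        return -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    return 0, 2 ** bits - 1

def affine_params(min_val: float, max_val: float, qmin: int, qmax: int):
    """Scale and zero point for affine quantization over [min_val, max_val]."""
    scale = (max_val - min_val) / (qmax - qmin)
    zero_point = round(qmin - min_val / scale)
    return scale, zero_point

# The fix: a signed int16 qrange spans -32768..32767, as the observers expect.
qmin, qmax = qrange(16, signed=True)
assert (qmin, qmax) == (-32768, 32767)

# The bug: with an unsigned 0..65535 range, min_float is forced onto 0 and
# max_float onto 65535, which is not the expected signed representation.
uqmin, uqmax = qrange(16, signed=False)
assert (uqmin, uqmax) == (0, 65535)
```

For a symmetric float range such as [-1.0, 1.0], the signed and unsigned qranges yield the same scale but very different zero points, which is why the incorrect qmin/qmax shifted every quantized weight.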