3a270fa9 - Support activation quantization without scaling (#2607)

Support activation quantization without scaling (#2607)

Summary:
Pull Request resolved: https://github.com/pytorch/benchmark/pull/2607

X-link: https://github.com/pytorch/pytorch/pull/148380

We enable activation quantization in the forward pass, and users can customize the dtype they want to quantize to.

Reviewed By: Hahu803, avicizhu

Differential Revision: D70522237

fbshipit-source-id: 9c501506e8bd40a1199fafb2e28e6384e7df4786
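
For reference, a minimal sketch of what "activation quantization without scaling" can look like in PyTorch: the input activation is cast directly to a user-chosen low-precision dtype in the forward pass, with no scale factor computed or applied. This is not the code from the pull request; the module name `QuantizedActivationLinear`, the `act_dtype` parameter, and the `float8_e4m3fn` default are illustrative assumptions (recent PyTorch with float8 dtypes is assumed).

```python
import torch
import torch.nn as nn


class QuantizedActivationLinear(nn.Module):
    """Illustrative linear layer that quantizes its input activations
    without scaling (sketch only; not the PR's actual implementation)."""

    def __init__(self, in_features: int, out_features: int,
                 act_dtype: torch.dtype = torch.float8_e4m3fn):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        # User-customizable quantization dtype, per the commit summary.
        self.act_dtype = act_dtype

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Quantize the activation by a plain dtype cast: values are rounded
        # into the target format as-is, with no per-tensor scale applied.
        x_q = x.to(self.act_dtype)
        # Cast back to the compute dtype for the matmul.
        return self.linear(x_q.to(x.dtype))


# Usage: quantize activations to float8 (e4m3) before the matmul.
layer = QuantizedActivationLinear(16, 8, act_dtype=torch.float8_e4m3fn)
out = layer(torch.randn(2, 16))
```

Skipping the scale computation trades accuracy for simplicity and speed: values outside the target dtype's representable range saturate or lose precision, which is why the dtype is left configurable.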