pytorch
f1dbfe2f - [ao][fx] Enable observed -> quantized float for static quantized MultiheadAttention (#95636)

Committed 3 years ago
Test Plan: Sandcastle

cc andrewor14 (any suggestions here?)

Differential Revision: D43631794
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95636
Approved by: https://github.com/andrewor14
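For context, the "observed -> quantized" step this commit enables is the second half of static quantization: a prepared ("observed") model that has collected activation statistics is converted into a model using quantized kernels. The sketch below is a hedged, generic illustration of that flow using the eager-mode API on a plain `nn.Linear` (not `MultiheadAttention`, and not the FX graph-mode path this PR touches); the module `M` and its sizes are made up for the example.

```python
import torch
import torch.nn as nn
from torch.ao.quantization import (
    QuantStub,
    DeQuantStub,
    get_default_qconfig,
    prepare,
    convert,
)

class M(nn.Module):
    """Toy float model; Quant/DeQuantStubs mark the quantized region."""
    def __init__(self):
        super().__init__()
        self.quant = QuantStub()
        self.fc = nn.Linear(16, 16)
        self.dequant = DeQuantStub()

    def forward(self, x):
        return self.dequant(self.fc(self.quant(x)))

model = M().eval()
# Pick the qconfig matching the active quantized backend.
model.qconfig = get_default_qconfig(torch.backends.quantized.engine)

# Step 1: insert observers -> the "observed" model.
observed = prepare(model)
observed(torch.randn(8, 16))  # calibration pass to collect statistics

# Step 2: observed -> quantized conversion (the step this commit enables
# for static quantized MultiheadAttention in the FX flow).
quantized = convert(observed)
out = quantized(torch.randn(2, 16))
```

After `convert`, `quantized.fc` is a quantized linear module running int8 kernels; the analogous conversion for `MultiheadAttention` swaps in its quantized counterpart.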