[ao][fx] Enable observed -> quantized float for static quantized MultiheadAttention (#95636)
Test Plan:
Sandcastle
cc @andrewor14: any suggestions here?
Differential Revision: D43631794
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95636
Approved by: https://github.com/andrewor14