[PyTorch Edge] Add Quantized Softmax Op (Naive Implementation) (#75017)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75017
This version simply dequantizes the input, runs the fp32 softmax, and re-quantizes the result.
A version with an actual quantized softmax implementation using QNNPACK will be added next.
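The dequantize → fp32 softmax → quantize flow can be sketched in NumPy as follows. This is an illustrative sketch only, not the op's actual implementation; the helper names, scales, and zero points are assumptions made for the example (the output scale of 1/256 with zero point 0 is a common choice for uint8 softmax outputs, since softmax values lie in [0, 1]):

```python
import numpy as np

def dequantize(q, scale, zero_point):
    # Map uint8 codes back to fp32: x = scale * (q - zero_point)
    return scale * (q.astype(np.float32) - zero_point)

def quantize(x, scale, zero_point):
    # Map fp32 values to uint8 codes, rounding and clamping to [0, 255]
    q = np.round(x / scale + zero_point)
    return np.clip(q, 0, 255).astype(np.uint8)

def naive_quantized_softmax(q, in_scale, in_zp, out_scale=1.0 / 256, out_zp=0):
    # Naive approach: dequantize, compute softmax in fp32, re-quantize.
    x = dequantize(q, in_scale, in_zp)
    # Numerically stable softmax: subtract the row max before exponentiating
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    y = e / e.sum(axis=-1, keepdims=True)
    return quantize(y, out_scale, out_zp)

q = np.array([[100, 120, 140, 160]], dtype=np.uint8)
out = naive_quantized_softmax(q, in_scale=0.1, in_zp=128)
```

Dequantizing `out` with the output scale recovers probabilities that sum to approximately 1, at the cost of a full fp32 round trip per call, which is what the planned QNNPACK version would avoid.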
Test Plan:
From fbcode:
```buck test caffe2/test:quantization -- test_qsoftmax```
Benchmarking: See summary of D34996486
Reviewed By: kimishpatel
Differential Revision: D34943147
fbshipit-source-id: 426a0780803597a21460139c67960891d6e9cc81
(cherry picked from commit 524eede541773299fc015f47c6cd6275ed5cf421)