[quant][fx][improvement] Add lowering support for BatchNormQuantizeHandler (#72490)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72490
This is part of the effort to move the current implementation towards the reference quantized model design described in
https://github.com/pytorch/rfcs/blob/master/RFC-0019-Extending-PyTorch-Quantization-to-Custom-Backends.md
so that the reference model is used in the default fbgemm/qnnpack lowering path.
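For context, a minimal sketch of how the lowered path can be exercised for a BatchNorm-only module under FX graph mode quantization (this assumes the prepare_fx/convert_fx API of this era, which took a qconfig dict; later releases also require an example_inputs argument, and the module name `M` below is just illustrative):

```python
# Minimal sketch, assuming the PyTorch 1.11-era FX quantization API.
import torch
from torch.ao.quantization import get_default_qconfig
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx


class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.bn = torch.nn.BatchNorm2d(3)

    def forward(self, x):
        return self.bn(x)


model = M().eval()
qconfig_dict = {"": get_default_qconfig("fbgemm")}
prepared = prepare_fx(model, qconfig_dict)
prepared(torch.randn(1, 3, 8, 8))  # calibration pass to collect observer stats
quantized = convert_fx(prepared)
# With lowering support for BatchNormQuantizeHandler, the reference pattern
# (dequantize -> BatchNorm2d -> quantize) is replaced by the quantized
# batch norm op in the final converted graph.
print(quantized.graph)
```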
Test Plan:
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps.test_qbatch_norm
Imported from OSS
Reviewed By: vkuzo, andrewor14
Differential Revision: D34062365
fbshipit-source-id: ed015c61f5b969554a6477f92cf6be2358cb558c
(cherry picked from commit 9498421dddddd984c27f74a1c8c5ca87d6bdc474)