Quantization aware training: Freeze batch norm support (#26624)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26624
For QAT we need to be able to control batch norm for all modules from the top level. This adds helper functions to enable/disable batch norm freezing during training.
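The pattern this describes can be sketched as follows. This is a minimal, torch-free illustration of the idea (the `Module`, `ConvBn`, `freeze_bn_stats`, and `update_bn_stats` names here are illustrative stand-ins, not the actual PyTorch API): each fused Conv+BatchNorm QAT module exposes freeze/unfreeze methods, and a top-level helper passed to `model.apply(...)` toggles all of them in one call.

```python
class Module:
    """Tiny stand-in for torch.nn.Module with an .apply()-style traversal."""
    def __init__(self):
        self.children = []

    def apply(self, fn):
        # Recursively apply fn to every submodule, then to self,
        # mirroring how model.apply(...) reaches all leaf modules.
        for child in self.children:
            child.apply(fn)
        fn(self)
        return self


class ConvBn(Module):
    """Stand-in for a fused Conv+BatchNorm module used during QAT."""
    def __init__(self):
        super().__init__()
        self.freeze_bn = False

    def freeze_bn_stats(self):
        self.freeze_bn = True   # stop updating running mean/var

    def update_bn_stats(self):
        self.freeze_bn = False  # resume updating running statistics


# Top-level helpers: passed to model.apply(...), so a single call
# controls batch norm freezing for every submodule in the model.
def freeze_bn_stats(mod):
    if isinstance(mod, ConvBn):
        mod.freeze_bn_stats()


def update_bn_stats(mod):
    if isinstance(mod, ConvBn):
        mod.update_bn_stats()


# Usage: freeze BN statistics partway through QAT fine-tuning,
# then unfreeze if further adaptation is needed.
model = Module()
model.children = [ConvBn(), ConvBn()]
model.apply(freeze_bn_stats)
print(all(c.freeze_bn for c in model.children))       # True
model.apply(update_bn_stats)
print(any(c.freeze_bn for c in model.children))       # False
```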
ghstack-source-id: 91008297
Test Plan: buck test caffe2/test:quantization -- --print-passing-details
Differential Revision: D17512199
fbshipit-source-id: f7b981e2b1966ab01c4dbb161030177274a998b6