Change fbgemm_linear_{int8,fp16}_weight to fbgemm_linear_{int8,fp16}_weight_fp32_activation (#22955)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22955
Following the comment in https://github.com/pytorch/pytorch/pull/22891, rename the fbgemm wrapper functions to indicate whether they perform dynamic or static quantization: the new `_fp32_activation` suffix marks the paths that take fp32 activations, i.e. dynamic quantization.
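For context, a minimal sketch of the dynamic-quantization behavior the new suffix refers to, using the public `torch.quantization.quantize_dynamic` API (the model and shapes here are illustrative, not from this PR):

```python
import torch
import torch.nn as nn

# Dynamic quantization: weights are quantized to int8 ahead of time,
# while activations stay in fp32 and are quantized on the fly at runtime.
# This matches the "_fp32_activation" naming: the kernel consumes fp32
# activations together with pre-quantized weights.
model = nn.Sequential(nn.Linear(4, 2))
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# The quantized model still accepts ordinary fp32 input tensors.
out = quantized(torch.randn(1, 4))
```

Static quantization, by contrast, also quantizes the activations, so its kernels take quantized inputs rather than fp32 ones.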
Differential Revision: D16297512
fbshipit-source-id: 498678e2af27070628be11a6d724ce17c2a3cde5