Update the misleading comments for zero_points and scale in dynamic quant linear module [1/2] (#28767)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28767
The scale and zero_point stored in this module describe the output activation tensor, not the weight tensor, so the existing comments were misleading. We remove these attributes here because dynamic quantization computes the output activations in floating point, so no output scale or zero_point is needed.
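To illustrate the point, here is a minimal, hypothetical sketch (not the PyTorch implementation) of a dynamically quantized linear op: only the weight carries a quantization scale, and the output is produced directly in floating point, so the module has no use for an output scale or zero_point.

```python
# Hypothetical sketch of dynamic quantization for a linear layer.
# Only the weight is quantized (symmetric int8, zero_point = 0);
# the output stays in float, so no output scale/zero_point exists.

def quantize_weight(w, num_bits=8):
    """Symmetric per-tensor int8 quantization of a weight vector."""
    qmax = 2 ** (num_bits - 1) - 1              # 127 for int8
    scale = max(abs(v) for v in w) / qmax or 1.0
    q = [round(v / scale) for v in w]           # zero_point is 0
    return q, scale

def dynamic_linear(x, w_q, w_scale):
    """y = x . dequant(w_q); the result is a plain float, so the
    module needs no scale/zero_point for its output activation."""
    return sum(xi * (qi * w_scale) for xi, qi in zip(x, w_q))

w = [0.5, -1.0, 0.25]
x = [1.0, 2.0, 3.0]
w_q, w_scale = quantize_weight(w)
y = dynamic_linear(x, w_q, w_scale)   # close to the float result -0.75
```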
ghstack-source-id: 92807318
Test Plan: CI
Differential Revision: D18164949
fbshipit-source-id: 0f9172bfef615c30dc28e1dd4448a9f3cc897c2e