eager quant: remove fake_quant after add/mul nodes during QAT (#49213)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49213
Changes the behavior of Eager mode quantization to stop observing the output of add_scalar/mul_scalar during QAT; the output now reuses the input's quantization parameters, as the test names below indicate. The removed observation was unused, and this removes one difference between Eager and FX graph modes.
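To illustrate why no observer is needed after add_scalar, here is a minimal sketch (plain Python, not PyTorch's actual implementation) of the arithmetic: adding a scalar to a quantized tensor can reuse the input scale and only shift the zero point, so no new qparams need to be observed. The helper name `add_scalar_qparams` is hypothetical.

```python
def add_scalar_qparams(scale, zero_point, c):
    """Output qparams for (quantized x) + c, reusing the input scale.

    dequant(q) + c = scale * (q - zero_point) + c
                   = scale * (q - (zero_point - c / scale))
    so the output keeps the input scale and shifts the zero point.
    """
    return scale, zero_point - round(c / scale)

# Example: input qparams (scale=0.5, zero_point=10), add scalar 3.0.
out_scale, out_zp = add_scalar_qparams(0.5, 10, 3.0)
```

This mirrors the behavior checked by `test_add_scalar_uses_input_qparams` below: the output shares the input's quantization parameters rather than being re-observed.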
Test Plan:
```
python test/test_quantization.py TestQuantizeFxOps.test_quantized_add_qat
python test/test_quantization.py TestQuantizeFxOps.test_quantized_mul_qat
python test/test_quantization.py TestQuantizationAwareTraining.test_add_scalar_uses_input_qparams
python test/test_quantization.py TestQuantizationAwareTraining.test_mul_scalar_uses_input_qparams
```
Imported from OSS
Reviewed By: jerryzh168
Differential Revision: D25486276
fbshipit-source-id: 34a5d6ce0d08739319ec0f8b197cfc1309d71040