Freeze dynamic (re)quantization ops into standard ones (#42591)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42591
Lowering does not support the 2-input Int8Quantize or the 4-input Int8FC variants (where quantization parameters arrive as extra runtime inputs). Convert these ops to their standard forms by absorbing the quantization parameters into the ops themselves.
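The conversion described above can be sketched as a simple graph rewrite. This is a hypothetical illustration, not the actual Caffe2 implementation: ops are modeled as plain dicts, and the names `Y_scale`/`Y_zero_point` follow the usual Int8 op argument convention, while the `freeze_dynamic_quantize` helper and the `qparams` lookup are assumptions for the sketch.

```python
# Hypothetical sketch of "freezing" a dynamic quantization op: a 2-input
# Int8Quantize (data + qparam tensor) is rewritten into the standard 1-input
# form, with the quantization params baked into the op's arguments so the
# lowering backend only ever sees the supported variant.

def freeze_dynamic_quantize(ops, qparams):
    """Return a rewritten op list where each 2-input Int8Quantize has its
    qparam input absorbed into Y_scale / Y_zero_point arguments.

    qparams maps a qparam blob name to a (scale, zero_point) pair that is
    assumed to be known at freeze time.
    """
    frozen = []
    for op in ops:
        if op["type"] == "Int8Quantize" and len(op["inputs"]) == 2:
            data_in, qparam_in = op["inputs"]
            scale, zero_point = qparams[qparam_in]
            frozen.append({
                "type": "Int8Quantize",
                "inputs": [data_in],  # qparam input dropped: params now live in args
                "outputs": op["outputs"],
                "args": {"Y_scale": scale, "Y_zero_point": zero_point},
            })
        else:
            frozen.append(op)  # already in standard form; keep as-is
    return frozen


if __name__ == "__main__":
    ops = [{"type": "Int8Quantize", "inputs": ["X", "qp"],
            "outputs": ["X_q"], "args": {}}]
    out = freeze_dynamic_quantize(ops, {"qp": (0.05, 128)})
    print(out[0]["inputs"], out[0]["args"])
```

The same pattern extends to the 4-input Int8FC case: the extra qparam inputs are dropped and their values become op arguments.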
Test Plan:
```
buck test caffe2/caffe2/quantization/server:quantize_dnnlowp_op_test
```
Reviewed By: benjibc
Differential Revision: D22942673
fbshipit-source-id: a392ba2afdfa39c05c5adcb6c4dc5f814c95e449