Add shape inference functions for int8 quantization related ops (#41215)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41215
To unblock int8 model productization on accelerators, we need the shape and type info for all the blobs after int8 quantization. This diff adds shape inference functions for int8 quantization-related ops.
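As a rough illustration of what such inference functions compute (this is a hedged sketch, not the actual Caffe2 implementation; `TensorInfo`, `infer_int8_quantize`, and `infer_int8_fc` are hypothetical names), an int8 quantize op preserves the input shape while switching the element type, and an int8 FC op produces a `(batch, out_features)` output:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TensorInfo:
    shape: List[int]
    dtype: str

def infer_int8_quantize(inp: TensorInfo) -> TensorInfo:
    # Sketch: quantization keeps the shape and changes the element
    # type to uint8 (int8 tensors carry scale/zero_point separately).
    return TensorInfo(shape=list(inp.shape), dtype="uint8")

def infer_int8_fc(inp: TensorInfo, weight: TensorInfo) -> TensorInfo:
    # Sketch: a quantized FC maps (batch, in_features) x
    # (out_features, in_features) -> (batch, out_features).
    assert inp.shape[1] == weight.shape[1], "in_features must match"
    return TensorInfo(shape=[inp.shape[0], weight.shape[0]], dtype="uint8")

# Example: infer shapes through quantize -> FC.
x = TensorInfo(shape=[8, 16], dtype="float")
q = infer_int8_quantize(x)
w = TensorInfo(shape=[4, 16], dtype="uint8")
y = infer_int8_fc(q, w)
```

With functions like these registered per op, shape and type info can be propagated through a quantized net without running it.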
Test Plan:
```
buck test caffe2/caffe2/quantization/server:int8_gen_quant_params_test
buck test caffe2/caffe2/quantization/server:fully_connected_dnnlowp_op_test
```
Reviewed By: hx89
Differential Revision: D22467487
fbshipit-source-id: 8298abb0df3457fcb15df81f423f557c1a11f530