extra_repr for quantized modules (#24443)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24443
This gives us useful information about a module (e.g. quantization scale and zero point) when we print it, like so:
```
FloatModule(
  (quant): Quantize()
  (conv1): Conv2d(1, 20, kernel_size=(5, 5), stride=(1, 1), scale=0.08209919929504395, zero_point=128)
  (conv2): Conv2d(20, 50, kernel_size=(5, 5), stride=(1, 1), scale=0.16885940730571747, zero_point=128)
  (fc1): Linear(in_features=800, out_features=500, bias=True, scale=0.12840059399604797, zero_point=128)
  (fc2): Linear(in_features=500, out_features=10, bias=True, scale=0.260015606880188, zero_point=128)
  (dequant): DeQuantize()
)
```
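To illustrate the mechanism, here is a minimal, self-contained sketch (not the real `torch.nn.Module`, and the class names are hypothetical) of how an overridden `extra_repr()` feeds a module's key parameters, such as `scale` and `zero_point`, into the printed tree:

```python
# Hypothetical simplification of torch.nn.Module's repr machinery.
class Module:
    def __init__(self):
        self._modules = {}

    def add_module(self, name, module):
        self._modules[name] = module

    def extra_repr(self):
        # Subclasses override this to surface their key parameters.
        return ""

    def __repr__(self):
        extra = self.extra_repr()
        children = [
            f"  ({name}): " + repr(child).replace("\n", "\n  ")
            for name, child in self._modules.items()
        ]
        if not children:
            return f"{self.__class__.__name__}({extra})"
        return "\n".join([f"{self.__class__.__name__}("] + children + [")"])


class QuantizedLinear(Module):
    """Leaf module whose extra_repr includes quantization parameters."""
    def __init__(self, in_features, out_features, scale, zero_point):
        super().__init__()
        self.in_features = in_features
        self.out_features = out_features
        self.scale = scale
        self.zero_point = zero_point

    def extra_repr(self):
        return (f"in_features={self.in_features}, "
                f"out_features={self.out_features}, "
                f"scale={self.scale}, zero_point={self.zero_point}")


net = Module()
net.add_module("fc1", QuantizedLinear(800, 500, 0.1284, 128))
print(net)
```

Printing `net` produces an indented tree in which `fc1`'s line carries its `scale` and `zero_point`, mirroring the output shown above.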
Test Plan: Imported from OSS
Differential Revision: D16847140
Pulled By: jamesr66a
fbshipit-source-id: 8c995108f17ed1b086d1fb30471a41c532c68080