[quant] Optionally clamp weights post quantization (#83438)
Summary: Until we add quant_{min, max} args to `torch.quantize_per_{channel, tensor}`, this patch ensures we honor the observer's restrictions on quantized values by clamping weights after quantization.
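
A minimal sketch of the idea (plain Python, not the actual PyTorch implementation; the function name and signature are hypothetical): since the quantize op does not yet accept `quant_min`/`quant_max`, the integer representation is clamped to the observer's range after quantization.

```python
def quantize_and_clamp(values, scale, zero_point, quant_min, quant_max):
    """Quantize floats to ints, then clamp to the observer's
    [quant_min, quant_max] range (sketch of the post-quantization clamp)."""
    # affine quantization: q = round(x / scale) + zero_point
    q = [round(v / scale) + zero_point for v in values]
    # honor the observer's restrictions on quantized values
    return [min(max(x, quant_min), quant_max) for x in q]
```

For example, with `scale=0.1`, `zero_point=0`, and an int8 range of `[-128, 127]`, a value of `20.0` quantizes to `200` and is clamped to `127`.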
Test Plan: Added new tests; run with `buck run caffe2/test:quantization -- quantization.core.test_utils`
Differential Revision: D38624119
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83438
Approved by: https://github.com/andrewor14