[quant][pt2e] Add support for FixedQParamsQuantizationSpec (#102439)
Summary:
This PR adds support for FixedQParamsQuantizationSpec:
```
@dataclass(eq=True, frozen=True)
class FixedQParamsQuantizationSpec(QuantizationSpecBase):
    dtype: torch.dtype
    scale: float
    zero_point: int
    quant_min: Optional[int] = None
    quant_max: Optional[int] = None
    qscheme: Optional[torch.qscheme] = None
```
This is useful for defining the quantization spec for operators like sigmoid, which have predefined and fixed scale/zero_point values.
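As a quick illustration, the sketch below mirrors the dataclass from this PR as a standalone class (using plain strings for `dtype`/`qscheme` so it runs without torch; the real spec uses `torch.dtype` and `torch.qscheme`) and instantiates it for a sigmoid output. The qparams shown (scale = 1/256, zero_point = 0 over quint8's [0, 255] range) are the conventional fixed values for sigmoid, whose output always lies in [0, 1); treat them as an assumed example, not part of this PR:

```python
from dataclasses import dataclass
from typing import Optional

# Standalone mirror of the spec added in this PR; dtype/qscheme are str
# here instead of torch.dtype/torch.qscheme so the sketch is self-contained.
@dataclass(eq=True, frozen=True)
class FixedQParamsQuantizationSpec:
    dtype: str
    scale: float
    zero_point: int
    quant_min: Optional[int] = None
    quant_max: Optional[int] = None
    qscheme: Optional[str] = None

# Sigmoid outputs lie in [0, 1), so fixed affine qparams suffice:
# every quantized value q maps back to (q - zero_point) * scale.
sigmoid_spec = FixedQParamsQuantizationSpec(
    dtype="quint8",
    scale=1.0 / 256.0,
    zero_point=0,
    quant_min=0,
    quant_max=255,
)

# eq=True, frozen=True make specs immutable, comparable by value, and
# hashable, so identical specs can be deduplicated or used as dict keys.
same_spec = FixedQParamsQuantizationSpec(
    dtype="quint8", scale=1.0 / 256.0, zero_point=0, quant_min=0, quant_max=255
)
assert sigmoid_spec == same_spec
assert hash(sigmoid_spec) == hash(same_spec)
```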
Test Plan:
```
buck2 test mode/opt caffe2/test:quantization_pt2e -- 'caffe2/test:quantization_pt2e'
buck2 test mode/opt caffe2/test:quantization_pt2e -- --exact 'caffe2/test:quantization_pt2e - test_fixed_qparams_qspec (quantization.pt2e.test_quantize_pt2e.TestQuantizePT2E)'
```
Reviewed By: kimishpatel
Differential Revision: D46153082
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102439
Approved by: https://github.com/kimishpatel