15495f2d - [quant][pt2e] Introduce QuantizationAnnotation API (#101708)

[quant][pt2e] Introduce QuantizationAnnotation API (#101708)

Summary:
This diff adds QuantizationAnnotation and also refactors the existing annotation code to use this object.

```
@dataclass
class QuantizationAnnotation:
    # How some input nodes should be quantized, expressed as QuantizationSpec;
    # a map from torch.fx.Node to QuantizationSpec
    input_qspec_map: Dict[Node, QuantizationSpec]

    # How the output of this node is quantized, expressed as QuantizationSpec
    output_qspec: QuantizationSpec

class QuantizationSpec:
    dtype: torch.dtype
    is_dynamic: bool = False
    quant_min: Optional[int] = None
    quant_max: Optional[int] = None
    qscheme: Optional[torch.qscheme] = None
    ch_axis: Optional[int] = None
    # TODO: follow-up PR will add this
    # Kind of observer such as MinMaxObserver, PerChannelHistogramObserver etc.
    # observer_or_fake_quant_type: Union[ObserverBase, FakeQuantizeBase]
```

Example after the full refactor:

```
int8_qspec = QuantizationSpec(dtype=torch.int8, ...)
weight_qspec = QuantizationSpec(dtype=torch.int8, ...)
conv_node.meta["quantization_annotation"] = QuantizationAnnotation(
    input_qspec_map={input_node: int8_qspec, weight_node: weight_qspec},
    output_qspec=int8_qspec,
)
```

Note: right now input_qspec_map and output_qspec still use observer and fake quant constructors. A follow-up PR will change input_qspec_map and output_qspec to use QuantizationSpec directly.

Test Plan:
```
buck2 test mode/opt caffe2/test:quantization_pt2e -- --exact 'caffe2/test:quantization_pt2e - test_resnet18_with_quantizer_api (quantization.pt2e.test_quantize_pt2e.TestQuantizePT2EModels)'
```

Differential Revision: D45895027

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101708
Approved by: https://github.com/andrewor14
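
For context, a minimal standalone sketch of how a quantizer could attach these annotations to FX graph nodes is shown below. The QuantizationSpec and QuantizationAnnotation dataclasses here are simplified local re-creations of the classes described in the commit message, not imports from torch.ao.quantization, and the "quantization_annotation" meta key simply follows the convention in the example above; treat this as illustrative rather than the exact library API.

```
# Standalone sketch: simplified re-creations of the annotation objects from
# this commit, attached to FX nodes via node.meta (not the real pytorch API).
from dataclasses import dataclass, field
from typing import Dict, Optional

import torch
import torch.fx as fx


@dataclass
class QuantizationSpec:
    dtype: torch.dtype
    is_dynamic: bool = False
    quant_min: Optional[int] = None
    quant_max: Optional[int] = None
    qscheme: Optional[torch.qscheme] = None
    ch_axis: Optional[int] = None


@dataclass
class QuantizationAnnotation:
    # How each input node should be quantized: torch.fx.Node -> QuantizationSpec
    input_qspec_map: Dict[fx.Node, QuantizationSpec] = field(default_factory=dict)
    # How the output of the annotated node should be quantized
    output_qspec: Optional[QuantizationSpec] = None


class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 8, 3)

    def forward(self, x):
        return self.conv(x)


gm = fx.symbolic_trace(M())
int8_qspec = QuantizationSpec(
    dtype=torch.int8,
    quant_min=-128,
    quant_max=127,
    qscheme=torch.per_tensor_affine,
)

# Annotate every conv call: map its activation input to int8 and mark its output int8.
for node in gm.graph.nodes:
    if node.op == "call_module" and isinstance(
        gm.get_submodule(node.target), torch.nn.Conv2d
    ):
        input_node = node.args[0]
        node.meta["quantization_annotation"] = QuantizationAnnotation(
            input_qspec_map={input_node: int8_qspec},
            output_qspec=int8_qspec,
        )
```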