3b00b17f - [docs] Updated quantization docs to show per channel support for conv1d (#81349)

Summary:
There is currently per-channel quantization support for Conv1d, but this was not reflected in the quantization documentation where it discusses which modules have per-channel support. This change adds Conv1d to that list; the existing support is reproducible through the test plan below.

Test Plan:
```
import torch
from torch.ao.quantization import QConfigMapping, quantize_fx


class SingleLayerModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1d = torch.nn.Conv1d(5, 5, 1).to(dtype=torch.float)

    def forward(self, x):
        x = self.conv1d(x)
        return x

    def get_example_inputs(self):
        return (torch.rand(5, 5, 1),)


torch.backends.quantized.engine = "fbgemm"
model = SingleLayerModel()
example_input = model.get_example_inputs()[0]
q_config_mapping = QConfigMapping()
q_config_mapping.set_global(torch.ao.quantization.get_default_qconfig(torch.backends.quantized.engine))
prepared = quantize_fx.prepare_fx(model, q_config_mapping, example_input)
print(prepared.conv1d.qconfig.weight.p.func)
```

Running the above prints that the Conv1d's weight is observed by a PerChannelMinMaxObserver. To show that this does not hold for every module, replace the Conv1d with a ConvTranspose1d: running the same code then throws an error about the lack of per-channel support.

Reviewers:

Subscribers:

Tasks:

Tags:

Pull Request resolved: https://github.com/pytorch/pytorch/pull/81349
Approved by: https://github.com/andrewor14
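For background on why per-channel support matters, the following is a minimal plain-Python sketch (an illustration of the concept, not the PyTorch API) of the difference between the two schemes: per-tensor quantization derives one scale from the min/max of the entire weight tensor, while per-channel quantization, as a PerChannelMinMaxObserver does for Conv1d weights, derives an independent scale per output channel, so a small-range channel is not forced onto a coarse scale by a large-range one.

```python
def minmax_scale(values, qmax=127):
    """Symmetric int8 scale derived from the min/max of the given values."""
    bound = max(abs(min(values)), abs(max(values)))
    return bound / qmax if bound else 1.0

def per_tensor_scales(weight):
    """One scale, computed over the whole tensor, shared by every channel."""
    flat = [v for channel in weight for v in channel]
    return [minmax_scale(flat)] * len(weight)

def per_channel_scales(weight):
    """An independent scale for each output channel."""
    return [minmax_scale(channel) for channel in weight]

# Toy 2-output-channel weight: one small-range and one large-range channel.
weight = [[0.1, -0.2], [10.0, -8.0]]
print(per_tensor_scales(weight))   # both channels share the coarse scale
print(per_channel_scales(weight))  # the small channel gets a much finer scale
```

With per-tensor scales, channel 0 is quantized with a scale sized for channel 1's range; per-channel scales give channel 0 a scale roughly 50x finer, which is the accuracy benefit the per-channel observers provide.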