pytorch
aa5e3ad7 - [quant] Support PerChannel quantization in FusedMovingAvgObsFakeQuantize (#62346)

[quant] Support PerChannel quantization in FusedMovingAvgObsFakeQuantize (#62346)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62346

Update the operator code to resize the min/max tensors if per-channel quantization is selected. We need to do this because, by default, the observer creates empty tensors for the min/max and scale/zero_point values when per-channel quantization is enabled.

Test Plan:
python test/test_quantization.py test_fused_mod_per_channel

Imported from OSS

Reviewed By: HDCharles

Differential Revision: D30003835

fbshipit-source-id: b5ec80261cb50ee543f21191a887e979dcde4667
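The change described above can be sketched as follows. This is a hedged, torch-free illustration of the resize-if-empty logic the commit adds, not the actual operator code; the function name `maybe_resize_min_max` and its signature are hypothetical.

```python
# Illustrative sketch (pure Python, no torch) of the behavior the commit
# describes: when per-channel quantization is selected, the observer's
# default empty (size-0) min/max buffers must be resized to hold one
# entry per channel before the first observation.

def maybe_resize_min_max(running_min, running_max, per_channel, num_channels):
    """Hypothetical helper: grow empty min/max buffers for per-channel quant.

    By default the observer allocates empty buffers for min/max (and
    scale/zero_point); with per-channel quantization enabled they need
    num_channels slots.
    """
    if per_channel and len(running_min) == 0:
        # Initialize so the first observed batch always updates every channel.
        running_min = [float("inf")] * num_channels
        running_max = [float("-inf")] * num_channels
    return running_min, running_max

# First forward pass: buffers start empty, then get one slot per channel.
rmin, rmax = maybe_resize_min_max([], [], per_channel=True, num_channels=4)
print(len(rmin), len(rmax))  # 4 4
```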