pytorch
e9db51f9 - Enable float requantization for avgpool/gavgpool ops. (#37037)

Enable float requantization for avgpool/gavgpool ops. (#37037)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37037

For avgpool and gavgpool, change the requantization scheme: similar to conv and linear, now convert the accumulated int32 values to float, apply the requantization scale (which includes the averaging multiplier), convert the resulting float value back to int32, and add output_zero_point.

Benchmark numbers compared to baseline (% speedup on Pixel XL):

--------------------------------
|          | aarch32 | aarch64 |
| avgpool  |  0.4%   |  13.6%  |
| gavgpool | -2.6%   |   3.5%  |
--------------------------------

Test Plan: Tested via q8avgpool-test, q8gavgpool-test, average-pooling-test and global-average-pooling-test in PT QNNPACK. Also via the integrated test_quantized.py:

python test/quantization/test_quantized.py

Imported from OSS

Differential Revision: D21168981

fbshipit-source-id: 9060324304603ca7fd380c788a87b01a6d586c5c
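The float requantization path described in the summary can be sketched as follows. This is a hypothetical illustration, not the QNNPACK kernel: the function name `requantize_avgpool` and its parameters are made up for clarity, and the scale factors are example values. The key idea from the commit is that the 1/pool_size averaging multiplier is folded into a single float requantization scale applied to the int32 accumulator, after which the result is rounded, offset by the output zero point, and clamped to the quantized range.

```python
def requantize_avgpool(acc_int32, input_scale, output_scale, pool_size,
                       output_zero_point, qmin=0, qmax=255):
    """Requantize int32 avgpool accumulators to quantized uint8 values.

    Hypothetical sketch of the scheme described in the commit message:
    the averaging multiplier (1/pool_size) is folded into one float
    requantization scale, then the scaled value is rounded, shifted by
    the output zero point, and clamped to [qmin, qmax].
    """
    # Single combined scale: dequantize, average, and requantize in one step.
    requant_scale = input_scale / (output_scale * pool_size)
    out = []
    for acc in acc_int32:
        val = round(acc * requant_scale) + output_zero_point
        out.append(max(qmin, min(qmax, val)))  # clamp to the uint8 range
    return out
```

For example, with `input_scale=0.5`, `output_scale=0.25`, `pool_size=4`, and `output_zero_point=128`, the combined scale is 0.5, so an accumulator of 100 maps to 178; a large accumulator saturates at the clamp bound.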