pytorch
4d9c017d - Fix the padding issue of quantized average pool operator (#28260)

Fix the padding issue of quantized average pool operator (#28260)

Summary: This is actually a bug in both the test and the average pool implementation. In the test, we used the quantized values as the float reference input and failed to pad with zero_point. In the op implementation, the divisor used for averaging is incorrect in the padded case when count_include_pad is true.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/28260

Differential Revision: D18039960

Pulled By: lly-zero-one

fbshipit-source-id: 7b5d34498b60f5d574a276a22798c9f576944734
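The sketch below is not PyTorch's actual kernel or test code; it is a minimal reference illustrating the two points in the summary: padded border cells must hold zero_point (the quantized encoding of 0.0), not raw 0, and with count_include_pad=True the divisor is the full kernel window, including the padded cells. The helper name `ref_quantized_avg_pool2d` and the usage parameters are hypothetical.

```python
import torch
import torch.nn.functional as F

def ref_quantized_avg_pool2d(qx, kernel_size, stride, padding):
    """Reference quantized avg pool with count_include_pad=True semantics."""
    scale, zero_point = qx.q_scale(), qx.q_zero_point()
    # Work on the integer representation; pad H and W with zero_point so the
    # padded cells dequantize to 0.0 rather than to -zero_point * scale.
    x = qx.int_repr().to(torch.float)
    x = F.pad(x, [padding] * 4, mode="constant", value=zero_point)
    # The input is already padded explicitly, so avg_pool2d with padding=0
    # divides by the full kernel area -- i.e. count_include_pad=True behavior.
    y = F.avg_pool2d(x, kernel_size, stride, padding=0)
    y = torch.clamp(torch.round(y), 0, 255).to(torch.uint8)
    # Re-wrap the integer result with the input's quantization parameters.
    return torch._make_per_tensor_quantized_tensor(y, scale, zero_point)

# Hypothetical usage:
qx = torch.quantize_per_tensor(torch.randn(1, 3, 8, 8), scale=0.1,
                               zero_point=128, dtype=torch.quint8)
out = ref_quantized_avg_pool2d(qx, kernel_size=3, stride=2, padding=1)
```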