pytorch
ef640355 - [quant][core][gpu][bug fix] Changed at::contiguous call to at::to for quantized cudnn operators

Commit
2 years ago
[quant][core][gpu][bug fix] Changed at::contiguous call to at::to for quantized cudnn operators

Summary: For 1x1 convolutions (kernel height == kernel width == 1), `at::contiguous` produces the wrong strides for the channels-last memory format: it still preserves the NCHW layout instead of producing NHWC. `at::to` produces the expected NHWC format. This was not an issue with cuDNN v8.3.3, but after upgrading to v8.4.0 we noticed this error. We have changed the `at::contiguous` call sites to `at::to` in `aten/src/ATen/native/quantized/cudnn`.

Test plan:

```
python test/test_quantization.py -k test_qconv2d_cudnn
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75559
Approved by: https://github.com/jerryzh168
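A minimal sketch of the stride arithmetic behind the bug (the helpers `nchw_strides` and `nhwc_strides` and the example shape are illustrative, not from the PyTorch source): for a 1x1 kernel, an NCHW-contiguous tensor and a true channels-last tensor place elements in the same memory order, so a stride-based "already channels-last?" check can pass and skip the restride, leaving NCHW-style strides that cuDNN then misinterprets.

```python
# Stride sketch for the 1x1-kernel case described in the commit.
# Helper names and the example shape are assumptions for illustration.

def nchw_strides(n, c, h, w):
    """Strides of a contiguous NCHW tensor, in (N, C, H, W) dimension order."""
    return (c * h * w, h * w, w, 1)

def nhwc_strides(n, c, h, w):
    """Strides of a channels-last (NHWC memory) tensor, reported in
    (N, C, H, W) dimension order, the way PyTorch reports them."""
    return (h * w * c, 1, w * c, c)

# A hypothetical 1x1 conv weight: out_channels=8, in_channels=4, kernel 1x1.
shape = (8, 4, 1, 1)
print(nchw_strides(*shape))  # (4, 1, 1, 1) -- strides at::contiguous kept
print(nhwc_strides(*shape))  # (4, 1, 4, 4) -- strides the NHWC path expects

# With H == W == 1 both stride tuples describe the same element order in
# memory, so a format check based on comparing strides can treat the NCHW
# tensor as already channels-last and return it unchanged; at::to forces
# the explicit NHWC strides instead.
```

This also shows why only 1x1 kernels were affected: as soon as H or W exceeds 1, the two stride tuples describe genuinely different memory orders and the restride cannot be skipped.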