14b0e9e7 - [cuDNN] Don't enforce bitwise exact results in `test_conv_transposed_large_cuda` (#78147)

Committed: 2 years ago
`test_conv_transposed_large` expects bitwise-exact results in fp16 on CUDA, but cuDNN does not guarantee this behavior (e.g., when FFT-based algorithms are selected). This PR simply changes the tolerance on the test to account for these cases.

CC @ptrblck @ngimel

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78147
Approved by: https://github.com/ngimel
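The idea behind the fix can be sketched in plain Python: two algorithmically different implementations can produce fp16 results that agree to within rounding error yet are not bitwise identical, so the test should compare with a tolerance rather than exact equality. The values and tolerances below are illustrative assumptions, not the actual outputs or the tolerances chosen in the PR.

```python
import math

# Hypothetical fp16 outputs from two different convolution algorithms
# (e.g., an implicit-GEMM path vs. an FFT path). They agree to within
# fp16 precision but differ in the last bit at this magnitude.
result_gemm = 1024.0
result_fft = 1024.5  # one fp16 ulp away from result_gemm

# A bitwise-exact comparison (the test's original requirement) fails:
exactly_equal = result_gemm == result_fft

# A tolerance-based comparison (the approach the PR moves to) passes.
# These atol/rtol values are placeholders for illustration only.
atol, rtol = 1e-5, 1e-3
close = math.isclose(result_gemm, result_fft, rel_tol=rtol, abs_tol=atol)
```

In PyTorch's test suite this kind of check is typically expressed via `assertEqual` with `atol`/`rtol` arguments, so loosening the tolerance is a one-line change at the call site.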
Author: eqy