Refactor convolution_backward's cudnn cases (#71491)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71491
Changed the Cudnn and CudnnTranspose cases to only make the input
contiguous when it is needed for the grad_weight computation.
Reading the implementations of cudnn_convolution_transpose_backward and
cudnn_convolution_backward gives me confidence that `input` isn't used
for the grad_input computation. However, the memory format logic is
convoluted enough that I'm not 100% sure this is correct. All the tests
pass, though, and on request I can directly pass `backend_memory_format`
to {cudnn_convolution_backward, cudnn_convolution_transpose_backward}.
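
The refactor's core idea can be sketched as follows. This is an illustrative Python mock, not the actual ATen/C++ code: `FakeTensor`, `cudnn_backward_case`, and the output-mask layout are assumptions made for the example.

```python
class FakeTensor:
    """Stand-in that counts how often .contiguous() is called."""
    def __init__(self):
        self.contiguous_calls = 0

    def contiguous(self):
        self.contiguous_calls += 1
        return self


def cudnn_backward_case(inp, output_mask):
    # Assumed mask layout: (need_grad_input, need_grad_weight, need_grad_bias).
    need_grad_weight = output_mask[1]
    # Before the refactor, `inp` was made contiguous unconditionally;
    # after it, the copy happens only on the grad_weight path.
    if need_grad_weight:
        inp = inp.contiguous()
    return inp


t = FakeTensor()
cudnn_backward_case(t, (True, False, False))
assert t.contiguous_calls == 0  # grad_weight not needed: no copy
cudnn_backward_case(t, (True, True, False))
assert t.contiguous_calls == 1  # grad_weight needed: one copy
```

When only grad_input is requested (a common case during inference-style backward passes), the extra copy is skipped entirely.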
Test Plan: - pytest test/test_nn.py -v -k "conv"
Reviewed By: jbschlosser
Differential Revision: D33664694
Pulled By: zou3519
fbshipit-source-id: 9f4929686fe34f7aaf5331bfa49e98022b9d6c08
(cherry picked from commit 9e2ba0daca88139f7941bcb56bbc23825585d7a2)