Remove dilation restriction on cuDNN ConvTranspose2d (#46290)
Summary:
Closes https://github.com/pytorch/pytorch/issues/31690
I have verified the functionality of ConvTranspose2d (with this PR) on roughly 32,000 random shapes on V100 and A100, using cuDNN 8.0.4 and CUDA 11.1. The 32,000 shapes comprise 8,000 for each of the four (fp16, fp32) x (nchw, nhwc) combinations.
The random shapes are sampled from the following specification:
```jsonc
{
  "batch_size": {"low": 1, "high": 8},
  "in_channels": {"low": 16, "high": 128},
  "out_channels": {"low": 16, "high": 128},
  "height": {"low": 16, "high": 224},
  "stride": {"set": [[1, 1], [2, 2]]},
  "padding": {"set": [[0, 0]]},
  "output_padding": {"set": [[0, 0], [1, 1], [0, 1], [1, 0]]},
  "kernel_size": {"set": [[3, 3], [1, 1], [1, 3], [3, 1], [2, 2]]},
  "dilation": {"set": [[1, 1]]},
  "deterministic": {"set": [true, false]},
  "benchmark": {"set": [true, false]},
  "allow_tf32": {"set": [true, false]},
  "groups": {"set": [1, IN_CHANNELS]}
}
```
- Input `width` is the same as `height`.
- `groups` is either 1 or equal to `in_channels` (grouped convolution). When `groups` is 1, `out_channels` is random; when `groups` equals `in_channels`, `out_channels` also equals `in_channels`.
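The sampling described above can be sketched as follows. This is a hypothetical re-implementation, not the script actually used for the PR: the dictionary keys mirror the JSON spec, but `sample_shape` and the `"IN_CHANNELS"` placeholder handling are illustrative assumptions.

```python
import random

# Subset of the spec above; "set" rules pick one element, "low"/"high" rules
# draw an integer from an inclusive range.
SPEC = {
    "batch_size": {"low": 1, "high": 8},
    "in_channels": {"low": 16, "high": 128},
    "out_channels": {"low": 16, "high": 128},
    "height": {"low": 16, "high": 224},
    "stride": {"set": [[1, 1], [2, 2]]},
    "kernel_size": {"set": [[3, 3], [1, 1], [1, 3], [3, 1], [2, 2]]},
    "groups": {"set": [1, "IN_CHANNELS"]},  # placeholder resolved below
}

def sample_shape(spec, rng=random):
    shape = {}
    for key, rule in spec.items():
        if "set" in rule:
            shape[key] = rng.choice(rule["set"])
        else:
            shape[key] = rng.randint(rule["low"], rule["high"])
    # Grouped case: groups and out_channels are tied to in_channels.
    if shape["groups"] == "IN_CHANNELS":
        shape["groups"] = shape["in_channels"]
        shape["out_channels"] = shape["in_channels"]
    shape["width"] = shape["height"]  # width mirrors height, per the note above
    return shape
```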
All of the checked shapes can be found in CSV files at https://github.com/xwang233/code-snippet/tree/master/convtranspose2d-dilation/functionality-check-cudnn8.0.4.
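For reference, the output spatial size of `ConvTranspose2d`, including the dilation term that this PR allows on the cuDNN path, follows the formula from the PyTorch documentation. A minimal sketch (the helper name is mine):

```python
def conv_transpose2d_out_size(in_size, kernel_size, stride=1, padding=0,
                              output_padding=0, dilation=1):
    """Output spatial size along one dimension of ConvTranspose2d,
    per the formula in the PyTorch docs."""
    return ((in_size - 1) * stride - 2 * padding
            + dilation * (kernel_size - 1) + output_padding + 1)

# dilation=1: the only case previously dispatched to cuDNN
print(conv_transpose2d_out_size(16, kernel_size=3, stride=2, output_padding=1))  # 34
# dilation=2: now also eligible for the cuDNN path after this PR
print(conv_transpose2d_out_size(16, kernel_size=3, stride=2, dilation=2))  # 35
```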
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46290
Reviewed By: mruberry
Differential Revision: D24422091
Pulled By: ngimel
fbshipit-source-id: 9f0120f2995ae1575c0502f1b2742390d7937b24