cuDNN conv cache key patch (#81418)
Fixes #81106
Patches the cuDNN conv algorithm cache key to record the memory_format actually used in the descriptors, instead of blindly copying the memory_format of the inputs.
Note that, to be on the safe side, we could instead cache on all tensor strides. But given how we short-cut and align memory_format from the PyTorch tensor to the cuDNN descriptor, a single memory_format field in the cache key suffices.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81418
Approved by: https://github.com/ngimel