Use mkldnn copy for copy_ when self and src are Mkldnn layout (#54248)
Summary:
Currently, when `copy_` is called on tensors with Mkldnn layout, a RuntimeError is raised.
**Environment**
- CPU: Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz
- PyTorch master (1772e26f6380d1)
- built with USE_MKLDNN=1
**Sample code to reproduce:**
```python
import torch
x = torch.randn(4, 5, dtype=torch.float32)
mkldnn_x = x.to_mkldnn()
mkldnn_y = torch.randn(4, 5, dtype=torch.float32).to_mkldnn()
mkldnn_y.copy_(mkldnn_x)
print(x)
print(mkldnn_y.to_dense())
```
**Results:**
Actual:
```sh
Traceback (most recent call last):
File "mkldnn_copy.py", line 6, in <module>
mkldnn_y.copy_(mkldnn_x)
RuntimeError: unsupported tensor layout: Mkldnn
```
Expected:
```sh
# x
tensor([[ 0.1276, -0.1179, 1.1970, 2.4836, 1.9059],
[-1.9647, 0.8613, -0.5060, 0.1555, 0.3661],
[-0.1560, -0.2133, 0.3414, -1.7095, -2.3431],
[ 1.3291, 0.3083, 0.5523, -2.0577, -0.4740]])
# mkldnn_y
tensor([[ 0.1276, -0.1179, 1.1970, 2.4836, 1.9059],
[-1.9647, 0.8613, -0.5060, 0.1555, 0.3661],
[-0.1560, -0.2133, 0.3414, -1.7095, -2.3431],
[ 1.3291, 0.3083, 0.5523, -2.0577, -0.4740]])
```
This is because `copy_` does not support the Mkldnn layout. I therefore modified `copy_` to call `copy_mkldnn_` when both `self` and `src` have Mkldnn layout; a sketch of the dispatch is below.
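For context, here is a minimal sketch of that routing, assuming a `copy_impl`-style entry point and a `copy_mkldnn_(self, src, non_blocking)` signature (both illustrative; only the idea of dispatching to `copy_mkldnn_` comes from this PR):
```cpp
#include <ATen/ATen.h>
#include <ATen/NativeFunctions.h>

// Illustrative sketch, not the merged code: route Mkldnn-to-Mkldnn copies
// to copy_mkldnn_ instead of falling through to the dense path and raising
// "unsupported tensor layout: Mkldnn".
static at::Tensor& copy_impl_sketch(at::Tensor& self, const at::Tensor& src,
                                    bool non_blocking) {
  if (self.is_mkldnn() || src.is_mkldnn()) {
    // Only mkldnn -> mkldnn is handled here; mixed mkldnn/dense copies
    // are still rejected with an explicit error.
    TORCH_CHECK(self.is_mkldnn() && src.is_mkldnn(),
                "copy_() between mkldnn and dense tensors is not supported");
    return at::native::copy_mkldnn_(self, src, non_blocking);
  }
  // ... existing dense/sparse copy paths continue unchanged ...
  return self;
}
```
With this branch in place, the Python repro above copies the data and `x` and `mkldnn_y.to_dense()` print matching values, as shown in the expected output.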
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54248
Reviewed By: mrshenli
Differential Revision: D27641352
Pulled By: ezyang
fbshipit-source-id: 70a37cdacb4a40b250ca16f2f6ddb6b71ff52d90