pytorch
f56720ea - Optimize transpose copy on CPU using fbgemm transpose (#83327)

### Description

Optimize transpose copy on CPU using fbgemm transpose.

### Testing

single socket (28 cores):
```
before:
torch.Size([10, 128, 10, 124]) -> torch.Size([10, 128, 124, 10]) fp32: 4.819e-05 ms; bf16: 4.846e-05 ms
torch.Size([10, 128, 30, 124]) -> torch.Size([10, 128, 124, 30]) fp32: 0.000171 ms;  bf16: 0.000129 ms

after:
torch.Size([10, 128, 10, 124]) -> torch.Size([10, 128, 124, 10]) fp32: 2.439e-05 ms; bf16: 2.152e-05 ms
torch.Size([10, 128, 30, 124]) -> torch.Size([10, 128, 124, 30]) fp32: 0.000132 ms;  bf16: 3.916e-05 ms
```

single core:
```
before:
torch.Size([10, 128, 10, 124]) -> torch.Size([10, 128, 124, 10]) fp32: 0.00109 ms; bf16: 0.00103 ms
torch.Size([10, 128, 30, 124]) -> torch.Size([10, 128, 124, 30]) fp32: 0.00339 ms; bf16: 0.00295 ms

after:
torch.Size([10, 128, 10, 124]) -> torch.Size([10, 128, 124, 10]) fp32: 0.000566 ms; bf16: 0.000382 ms
torch.Size([10, 128, 30, 124]) -> torch.Size([10, 128, 124, 30]) fp32: 0.00282 ms;  bf16: 0.000999 ms
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/83327
Approved by: https://github.com/frank-wei
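The original benchmark script is not included in the commit message, but the measured operation — transposing the last two dimensions and materializing the result as a contiguous copy — can be reproduced with a sketch like the following. The `bench_transpose_copy` helper and the timing harness are assumptions for illustration, not the authors' actual script.

```python
import time
import torch

def bench_transpose_copy(shape, dtype, iters=100):
    # Hypothetical reproduction of the benchmarked op: transpose the last
    # two dims, then force a copy into contiguous memory. The contiguous()
    # call is what exercises the CPU transpose-copy kernel.
    x = torch.randn(shape).to(dtype)
    y = x.transpose(-1, -2).contiguous()  # warm-up
    start = time.perf_counter()
    for _ in range(iters):
        y = x.transpose(-1, -2).contiguous()
    elapsed_ms = (time.perf_counter() - start) / iters * 1e3
    return y.shape, elapsed_ms

for dtype in (torch.float32, torch.bfloat16):
    out_shape, ms = bench_transpose_copy((10, 128, 10, 124), dtype)
    print(f"{dtype}: -> {out_shape}, {ms:.6f} ms")
```

Per-core thread counts (e.g. the 28-core single-socket run vs. single core) would be controlled externally, for instance via `torch.set_num_threads(1)` or `OMP_NUM_THREADS`.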