pytorch
13f28df4 - disable contiguity on cross dimensional overlapped tensor

Commit
3 years ago
disable contiguity on cross dimensional overlapped tensor

Unmark contiguity in the stride properties when dimensions potentially cover overlapping memory. This check could be done more accurately, per dimension rather than as a single global flag per tensor; I'm keeping it simple here, since the existing code gives us correctness, and that's what's important.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74359
Approved by: https://github.com/ngimel, https://github.com/malfet
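The actual change lives in PyTorch's C++ stride-property computation, but the idea can be sketched in plain Python: a tensor's sizes and strides may map distinct indices onto the same memory location (e.g. a stride-0 dimension produced by `expand`, or strides set via `as_strided`), and in that case the commit conservatively refuses to mark the tensor contiguous. The function names below (`may_overlap`, `is_contiguous`) are hypothetical illustrations, not PyTorch APIs:

```python
def may_overlap(sizes, strides):
    """Conservatively detect whether (sizes, strides) can alias memory.

    Sort dimensions by stride; each dimension's stride must be at least
    one past the maximum offset reachable by the smaller-stride dims,
    otherwise two different index tuples can hit the same element.
    """
    dims = sorted((st, sz) for st, sz in zip(strides, sizes) if sz > 1)
    max_offset = 0  # largest offset reachable using the dims seen so far
    for st, sz in dims:
        if st < max_offset + 1:
            return True  # possible cross-dimensional overlap
        max_offset += st * (sz - 1)
    return False


def is_contiguous(sizes, strides):
    """Row-major contiguity check, disabled on possibly-overlapped layouts.

    Mirrors the commit's behavior: if dimensions may cover overlapping
    memory, never report contiguous, even if the strides happen to match
    the contiguous pattern.
    """
    if may_overlap(sizes, strides):
        return False
    expected = 1
    for sz, st in zip(reversed(sizes), reversed(strides)):
        if sz != 1:  # size-1 dims place no constraint on stride
            if st != expected:
                return False
            expected *= sz
    return True


# A (3, 2) tensor with strides (1, 1): element [0, 1] and [1, 0] alias,
# so it must not be treated as contiguous.
print(is_contiguous((3, 2), (2, 1)))  # standard dense layout -> True
print(is_contiguous((3, 2), (1, 1)))  # overlapped layout -> False
```

A per-dimension version would record which dimensions overlap instead of returning a single boolean, which is exactly the finer-grained check the commit message mentions deferring.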