functionalization: fix striding for out-of-place copy() (#82009)
Previously we'd implemented `at::native::copy()` using `expand()` for efficiency, but that approach prevents `copy()` from following the same semantics as `copy_()`.
The output of both `copy_()` and `copy()` should have the same amount of storage as the original (destination) tensor, so that it can be used in the same set of operations afterward.
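The storage difference can be illustrated with a short sketch (a hypothetical example, not this PR's test code): an `expand()`-ed tensor is a stride-0 view over a smaller storage, whereas `copy_()` writes into a fully materialized destination.

```python
import torch

# expand() broadcasts a 1-element tensor to shape (3,) without copying:
src = torch.zeros(1)
expanded = src.expand(3)
# The expanded view has stride 0 and shares the single-element storage.
assert expanded.stride() == (0,)
assert expanded.untyped_storage().nbytes() == src.element_size()

# copy_() fills a real 3-element destination, preserving its strides
# and full storage -- the semantics out-of-place copy() should match.
dst = torch.empty(3)
dst.copy_(src)
assert dst.stride() == (1,)
assert dst.untyped_storage().nbytes() == 3 * dst.element_size()
```

(`untyped_storage()` is available in recent PyTorch releases; older versions expose the equivalent via `storage()`.)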
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82009
Approved by: https://github.com/ezyang