[PyTorch] Avoid storage refcount bump in copy_tensor_metadata (#48877)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48877
Constructing the `TensorImpl` with a `Storage` only to set that storage again in
`copy_tensor_metadata` wastes one refcount bump per copy.
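A minimal sketch of the refcount cost being avoided, using hypothetical stand-in types (`StorageHandle`, `FakeTensorImpl`) rather than the actual PyTorch classes: constructing with a storage handle and then overwriting it in a copy helper performs two increments, while constructing empty and setting the storage once performs only one.

```cpp
// Hypothetical illustration only; not the real TensorImpl/Storage code.
#include <atomic>
#include <cstdio>
#include <utility>

struct StorageBuffer {
  std::atomic<int> refcount{0};
  std::atomic<int> bumps{0};  // how many increments happened
};

class StorageHandle {  // minimal intrusive-refcount handle
 public:
  StorageHandle() = default;
  explicit StorageHandle(StorageBuffer* buf) : buf_(buf) { retain(); }
  StorageHandle(const StorageHandle& other) : buf_(other.buf_) { retain(); }
  StorageHandle(StorageHandle&& other) noexcept : buf_(other.buf_) { other.buf_ = nullptr; }
  StorageHandle& operator=(StorageHandle other) { std::swap(buf_, other.buf_); return *this; }
  ~StorageHandle() { release(); }
 private:
  void retain() { if (buf_) { buf_->refcount++; buf_->bumps++; } }
  void release() { if (buf_) buf_->refcount--; }
  StorageBuffer* buf_ = nullptr;
};

class FakeTensorImpl {
 public:
  FakeTensorImpl() = default;                       // leaves storage_ empty
  explicit FakeTensorImpl(StorageHandle storage)    // takes a bump via the copy
      : storage_(std::move(storage)) {}
  void set_storage(const StorageHandle& s) { storage_ = s; }  // one bump
 private:
  StorageHandle storage_;
};

int main() {
  StorageBuffer buf;
  StorageHandle src(&buf);
  buf.bumps = 0;  // only count the copy paths below

  // Old pattern: storage passed to the ctor (bump), then set again in the
  // metadata-copy step (second bump, plus a matching release).
  {
    FakeTensorImpl dst(src);
    dst.set_storage(src);
  }
  std::printf("ctor-then-copy style: %d bumps\n", buf.bumps.load());

  buf.bumps = 0;
  // New pattern: construct without storage and set it exactly once.
  {
    FakeTensorImpl dst;
    dst.set_storage(src);
  }
  std::printf("set-once style: %d bumps\n", buf.bumps.load());
  return 0;
}
```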
ghstack-source-id: 117937872
Test Plan:
Internal benchmark. Compared results with `perf`; saw a 0.15%
reduction in the percentage of total time spent in
`TensorImpl::shallow_copy_and_detach`.
Reviewed By: bhosmer
Differential Revision: D25353529
fbshipit-source-id: e85d3a139ccd44cbd059c14edb19b22b962881a9