6942fccf - Skip superfluous storage allocations while constructing meta tensors (#65331)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65331
ghstack-source-id: 148862595

This is a performance optimization for the use case:

```
tensor = torch.tensor(<large_data>, device='meta')
```

where the current implementation performs a superfluous memory allocation on CPU even though the target device is meta.

Test Plan: Run the existing tests, since no behavioral change is introduced.

Reviewed By: ezyang

Differential Revision: D31055036

fbshipit-source-id: 04d6c13594a71fc65bf2fbd567ee71833a879851
(cherry picked from commit 489d0a151a5fc4f5a0d8e3e65897bf7d02affe4b)
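For illustration, a minimal sketch of the use case this commit targets: constructing a tensor directly on the `meta` device produces only shape/dtype metadata, with no real data buffer. The small literal here stands in for the `<large_data>` above; the exact error raised on data access may vary by PyTorch version.

```python
import torch

# Build a tensor directly on the 'meta' device. Only metadata (shape,
# dtype, strides) is tracked; with this optimization, the intermediate
# CPU allocation for the source data is skipped as well.
t = torch.tensor([[1.0, 2.0], [3.0, 4.0]], device='meta')

print(t.shape)    # torch.Size([2, 2])
print(t.dtype)    # torch.float32
print(t.is_meta)  # True

# Meta tensors carry no values, so reading data back
# (e.g. t.tolist()) raises, since there is nothing to copy out.
```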