pytorch
87cf277b - Don't allocate result Tensors in out overloads: _linalg_solve_out_helper_cuda (#55321)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55321

We have some operators that previously allowed you to pass an undefined tensor as the out argument and would then allocate it for you. This behavior is broken: it doesn't work in the JIT when values are converted to/from IValues, and it therefore blocks backend fallbacks, which are forced to go through IValue. This PR is one in a series that removes that behavior and requires out arguments to be defined tensors. It only covers at::_linalg_solve_out_helper_cuda(); further PRs handle the other ops.

ghstack-source-id: 125886984

(Note: this ignores all push blocking failures!)

Test Plan: waitforsandcastle

Reviewed By: ngimel

Differential Revision: D27572759

fbshipit-source-id: 5bca60b39c513b8d85fe282ebd4d66607d54774f
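As a minimal sketch of the caller-side pattern this change enforces, the example below uses the public torch.linalg.solve API (which dispatches to solve helpers such as the one touched here; the exact dispatch path is an assumption). The out= argument must be a defined, caller-allocated tensor rather than an undefined one the op allocates on your behalf:

```python
import torch

# Solve A x = b with a caller-allocated output tensor.
A = torch.tensor([[3.0, 1.0], [1.0, 2.0]])
b = torch.tensor([[9.0], [8.0]])

# The out= tensor must be a defined (already constructed) Tensor;
# relying on the op to allocate an undefined out tensor is the
# behavior this series of PRs removes.
out = torch.empty(2, 1)
torch.linalg.solve(A, b, out=out)

# The result is written into the tensor we allocated.
assert torch.allclose(A @ out, b)
```

Pre-allocating out also makes the call representable as IValues in the JIT, since every argument is a concrete tensor.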