Expands OpInfo out= testing (#53259)
Summary:
Addresses several of the challenges described in https://github.com/pytorch/pytorch/issues/49468.
This PR builds on https://github.com/pytorch/pytorch/pull/50741 and https://github.com/pytorch/pytorch/issues/53105 to extend OpInfo out= testing. It covers the following cases for ops that produce a single tensor:
- out= values don't affect the computation
- a noncontiguous out= tensor produces the correct output and preserves its strides
- out= with the wrong shape triggers a warning
- out= with an empty tensor triggers no warning
- out= with the wrong device raises an error
- out= with a dtype the computation's result can't be "safely" cast to raises an error
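The cases above can be sketched with torch.add as a stand-in for any single-output op (a minimal illustration of the behavior the test checks, not the test itself):

```python
import warnings

import torch

a = torch.tensor([1.0, 2.0])
b = torch.tensor([3.0, 4.0])

# Wrong shape: the out tensor is resized and a UserWarning is emitted.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    torch.add(a, b, out=torch.empty(3))
assert any("resized" in str(w.message) for w in caught)

# Empty out tensor: resized silently, no warning.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    torch.add(a, b, out=torch.empty(0))
assert len(caught) == 0

# Unsafe cast: a float result can't be written to an integer out tensor.
try:
    torch.add(a, b, out=torch.empty(2, dtype=torch.long))
    raise AssertionError("expected a RuntimeError")
except RuntimeError:
    pass  # e.g. "result type Float can't be cast to ... Long"
```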
The new test works with operations that produce a single tensor as well as operations that produce an iterable of tensors (the latter is exercised with operations like torch.svd).
In addition to the new out= test, the OpInfos have been updated: the "supports_tensor_out" metadata is replaced with the more general and straightforward "supports_out", and many operations that previously had to skip out= testing with an explicit SkipInfo no longer need to. A couple of redundant tests in test_unary_ufuncs.py have been removed, too.
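A minimal sketch of how "supports_out" metadata can replace per-op skips (the OpInfo class here is a stand-in for the real one in torch.testing, and the op names are illustrative):

```python
class OpInfo:
    """Simplified stand-in for PyTorch's OpInfo metadata record."""

    def __init__(self, name, supports_out=True):
        self.name = name
        # Declares whether the op has an out= variant; defaults to True.
        self.supports_out = supports_out


ops = [
    OpInfo("neg"),                            # has an out= variant
    OpInfo("tensor_split", supports_out=False),  # no out= variant
]


def ops_for_out_test(op_list):
    # The out= test is skipped automatically for ops without an out=
    # variant, replacing the explicit SkipInfo entries needed before.
    return [op for op in op_list if op.supports_out]


assert [op.name for op in ops_for_out_test(ops)] == ["neg"]
```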
Another benefit of these tests is that, once all operations have OpInfos, they will let us validate that passing incorrectly sized tensors to out= is universally deprecated, and give us the option to actually disable the behavior.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53259
Reviewed By: mrshenli
Differential Revision: D26894723
Pulled By: mruberry
fbshipit-source-id: 2b536e9baf126f36386a35f2f806dd88c58690b3