pytorch
de949a0e - Various OpInfo architecture improvements

Various OpInfo architecture improvements

This PR makes the following improvements:

- moves the custom skip list for test_normalize_operator_exhaustive in test_fx_experimental to the typical OpInfo skip architecture. The skips were updated to xfails, which identified some operators that were no longer failing the test
- removes tests in test_jit.py that were redundant with OpInfo-based testing
- improves test_dtypes so its error messages are clear and it makes test_nondifferentiable redundant; the latter test has been removed
- removes OpInfo.supports_complex_autograd() in favor of a more accurate and general check for whether a particular dtype is in the backward dtypes of the operator
- improves gradchecks to verify that an operator does not support grad if it claims not to
- improves gradchecks to test the gradient of all input tensors that require gradient
- removes the concept of "default test dtypes"
- removes excessive and mostly redundant out testing for elementwise unary operators
- removes OpInfo metadata for whether an op supports nuanced "safe casting" to out behavior
- converts numerous skips to xfails
- fixes the metadata of numerous OpInfos based on the new checks
- moves jit-specific utilities in common_methods_invocations.py to jit_programming_utils.py

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75951
Approved by: https://github.com/ngimel
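The change from OpInfo.supports_complex_autograd() to a per-dtype check can be sketched as below. This is a minimal, hypothetical stand-in, not PyTorch's actual OpInfo class (which lives in torch.testing._internal); the class, field, and method names here are illustrative only, and dtypes are modeled as plain strings.

```python
# Hypothetical, simplified stand-in for PyTorch's OpInfo metadata.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class OpInfo:
    name: str
    # dtypes (strings here for simplicity) for which backward is implemented
    backward_dtypes: frozenset = field(default_factory=frozenset)

    def supports_dtype_backward(self, dtype: str) -> bool:
        # Instead of one coarse boolean like supports_complex_autograd(),
        # ask whether this specific dtype is in the operator's backward
        # dtypes, which is both more accurate and more general.
        return dtype in self.backward_dtypes

# Illustrative entries; real backward dtype sets are queried from the ops.
sinh = OpInfo("sinh", frozenset({"float32", "float64", "complex64", "complex128"}))
relu = OpInfo("relu", frozenset({"float32", "float64"}))

assert sinh.supports_dtype_backward("complex64")
assert not relu.supports_dtype_backward("complex64")
```

A test driven by this metadata can then xfail (rather than skip) a dtype the op claims not to support, so an op that starts supporting it is flagged instead of silently passing.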
Author
Mike Ruberry