48318eba - Fix TestOpInfoCUDA.test_unsupported_dtypes_addmm_cuda_bfloat16 on ampere (#50440)

Summary: The `TestOpInfoCUDA.test_unsupported_dtypes_addmm_cuda_bfloat16` test in `test_ops.py` is failing on Ampere GPUs. The failure occurs because addmm does support bfloat16 on Ampere, while the test asserts that the dtype is unsupported.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50440

Reviewed By: mrshenli

Differential Revision: D25893326

Pulled By: ngimel

fbshipit-source-id: afeec25fdd76e7336d84eb53ea36319ade1ab421
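For context, here is a minimal sketch (not the code changed in this PR) that probes the behavior the test depends on: bfloat16 addmm executes successfully on Ampere (compute capability 8.x), so a test that expects a RuntimeError for that dtype must gate its assertion on the device. The helper name and structure below are illustrative assumptions.

```python
import torch

def bfloat16_addmm_runs(device="cuda"):
    """Return True if torch.addmm executes in bfloat16 on the given device."""
    m = torch.randn(2, 3, device=device).to(torch.bfloat16)
    a = torch.randn(2, 4, device=device).to(torch.bfloat16)
    b = torch.randn(4, 3, device=device).to(torch.bfloat16)
    try:
        torch.addmm(m, a, b)
        return True
    except RuntimeError:
        return False

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability()
    # On Ampere (sm_80 and newer) the call is expected to succeed, which is
    # why asserting "unsupported dtype" there made the test fail.
    print(f"compute capability {major}.{minor}, "
          f"bfloat16 addmm runs: {bfloat16_addmm_runs()}")
```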