pytorch
691c1391 - Do not use TF32 matmul in linalg and DDP tests (#56114)

Do not use TF32 matmul in linalg and DDP tests (#56114)

Summary: This PR relaxes test tolerances in two ways:
- Do not use TF32 for CUDA matmul in test_c10d. See https://github.com/pytorch/pytorch/issues/52941.
- Do not use TF32 for CUDA matmul in test_linalg, and increase atol for float and cfloat. See https://github.com/pytorch/pytorch/issues/50453. The tolerance is increased because most linear algebra operators are not very numerically stable in single precision.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56114
Reviewed By: ailzhang
Differential Revision: D28554467
Pulled By: ngimel
fbshipit-source-id: 90416be8e4c048bedb16903b01315584d344ecdf
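For context, a minimal sketch of the general technique: turning off TF32 for CUDA matmul via the public torch.backends.cuda.matmul.allow_tf32 flag and comparing against a higher-precision reference with a relaxed single-precision tolerance. The tf32_disabled helper, the test body, and the atol/rtol values are illustrative assumptions, not the actual changes made in this PR.

```python
import contextlib
import torch


@contextlib.contextmanager
def tf32_disabled():
    """Temporarily disable TF32 for CUDA matmuls so results are computed in full FP32."""
    prev = torch.backends.cuda.matmul.allow_tf32
    torch.backends.cuda.matmul.allow_tf32 = False
    try:
        yield
    finally:
        torch.backends.cuda.matmul.allow_tf32 = prev


def check_matmul_close():
    if not torch.cuda.is_available():
        return
    a = torch.randn(256, 256, device="cuda")
    b = torch.randn(256, 256, device="cuda")
    with tf32_disabled():
        result = a @ b
    # Compare against a float64 reference; the relaxed tolerances below are
    # illustrative, not the values chosen in the PR.
    reference = (a.double() @ b.double()).float()
    torch.testing.assert_close(result, reference, atol=1e-4, rtol=1e-4)


check_matmul_close()
```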