pytorch
6c31f56b - [Gradient Compression] Add cuda.synchronize back to batched PowerSGD (#54838)

Commit
4 years ago
[Gradient Compression] Add cuda.synchronize back to batched PowerSGD (#54838)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/54838

An explicit sync is somehow still needed for the batched PowerSGD hook: a job failure can be fixed by this change. The sync was previously removed by #54482.

Test Plan: f260900882 f260899693

Reviewed By: rohan-varma

Differential Revision: D27384738

fbshipit-source-id: 3efd738b9fd375e2ceb36ed3a6bf99cd8ce8ff95
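The fix described above reinstates an explicit `torch.cuda.synchronize` call in the batched PowerSGD communication hook. The sketch below is a hypothetical illustration of the pattern (the function name and placement are assumptions, not the actual hook code): after asynchronous GPU work on a tensor, an explicit synchronize blocks the host until the device stream has drained, so later host-side code cannot observe a partially computed result.

```python
import torch

def sync_after_compression(tensor: torch.Tensor) -> torch.Tensor:
    # Hypothetical helper: `tensor` stands in for a buffer that was just
    # touched by asynchronous GPU work (e.g. the orthogonalization step
    # in a batched PowerSGD hook).
    if tensor.is_cuda:
        # Block the host until all queued kernels on this device finish,
        # guaranteeing the tensor's contents are final before use.
        torch.cuda.synchronize(tensor.device)
    # On CPU tensors there is no device stream, so this is a no-op path.
    return tensor
```

On a CUDA build this adds a host-device sync point, which trades a small stall for correctness when downstream code reads the tensor outside the stream that produced it.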
Author
Yi Wang