ffaae32d - [Gradient Compression] Allow PowerSGD to run vanilla allreduce for the first K iterations (#50973)

[Gradient Compression] Allow PowerSGD to run vanilla allreduce for the first K iterations (#50973)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50973

This extends the original PowerSGD method to a hybrid approach: vanilla allreduce for the first K iterations, followed by PowerSGD compression. This can help further improve accuracy, at the cost of a lower speedup.

Also adds more comments on the fields in `PowerSGDState`.

Original PR issue: Investigate Applying PowerSGD to Communication Hook for Gradient Compression #47202

ghstack-source-id: 120257202

Test Plan:
buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_powerSGD_ddp_comm_hook_nccl
buck test mode/dev-nosan caffe2/test/distributed:distributed_nccl_fork -- test_DistributedDataParallel_powerSGD_ddp_comm_hook

Reviewed By: rohan-varma
Differential Revision: D26031478
fbshipit-source-id: d72e70bb28ba018f53223c2a4345306980b3084e
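For context, a minimal sketch of how a user might enable this hybrid behavior through the DDP communication hook API. The `start_powerSGD_iter` parameter name and the `register_comm_hook` registration shown here reflect the PowerSGD hook interface around this change; exact names and defaults may differ across PyTorch versions.

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.distributed.algorithms.ddp_comm_hooks import powerSGD_hook as powerSGD

# Assumes a process group has already been initialized
# (e.g., via dist.init_process_group) and a CUDA device is available.
model = DDP(torch.nn.Linear(1024, 1024).cuda(), device_ids=[dist.get_rank()])

# `start_powerSGD_iter` defers compression: the hook runs vanilla allreduce
# for the first K iterations, then switches to PowerSGD compression.
state = powerSGD.PowerSGDState(
    process_group=None,            # use the default process group
    matrix_approximation_rank=1,   # rank of the low-rank gradient approximation
    start_powerSGD_iter=1000,      # K: vanilla allreduce for the first 1000 steps
)
model.register_comm_hook(state, powerSGD.powerSGD_hook)
```

Running uncompressed allreduce during early training is a common warm-up pattern: gradients change rapidly at the start, so deferring the low-rank approximation until training stabilizes tends to recover most of the accuracy gap while keeping the later-iteration speedup.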
Author: Yi Wang