pytorch
1ca82f97 - Perform appropriate CUDA stream synchronization in distributed autograd.

Perform appropriate CUDA stream synchronization in distributed autograd.

The local autograd engine performs the appropriate stream synchronization between autograd nodes in the graph, ensuring that a consumer's stream is synchronized with the producer's stream before the consumer executes. In distributed autograd, however, the `SendRpcBackward` function receives gradients over the wire, and TensorPipe uses its own pool of streams for this purpose. As a result, the tensors are received on TensorPipe's stream pool, but `SendRpcBackward` runs on a different stream during the backward pass, and there is no logic to synchronize these streams.

To fix this, I've enhanced `DistEngine` to synchronize these streams appropriately when it receives grads over the wire.

Differential Revision: [D27025307](https://our.internmc.facebook.com/intern/diff/D27025307/)

[ghstack-poisoned]
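For context, the snippet below is a minimal sketch of the general producer/consumer stream synchronization pattern the commit message describes, not the actual `DistEngine` change. The two `torch.cuda.Stream` objects are hypothetical stand-ins: one for a TensorPipe I/O stream on which a gradient arrives, the other for the stream that runs the backward node.

```python
# Minimal sketch (assumed illustration, not DistEngine code): a tensor written
# on a "producer" stream must not be read on a "consumer" stream until the
# consumer stream has waited on the producer stream.
import torch

assert torch.cuda.is_available()

producer_stream = torch.cuda.Stream()  # stands in for a TensorPipe stream-pool stream
consumer_stream = torch.cuda.Stream()  # stands in for the stream running the backward node

with torch.cuda.stream(producer_stream):
    # "Receive" the gradient: work enqueued asynchronously on the producer stream.
    grad = torch.ones(1024, device="cuda") * 2.0

# Without this wait, work on consumer_stream could read `grad` before the
# producer stream has finished writing it.
consumer_stream.wait_stream(producer_stream)

with torch.cuda.stream(consumer_stream):
    # Consume the gradient; safe because of the wait_stream call above.
    result = grad.sum()

# Sync the default stream with the consumer before reading the result on the host.
torch.cuda.current_stream().wait_stream(consumer_stream)
print(result.item())
```

The missing piece described in the commit is essentially the `wait_stream` step: gradients arrived on TensorPipe's streams, but nothing made the backward-pass stream wait on them.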