pytorch
f933fa36 - [docs][1.5] update RPC docs to reflect correct use of dist_autograd backwards and dist_optim step() (#34670)

[docs][1.5] update RPC docs to reflect correct use of dist_autograd backwards and dist_optim step() (#34670)

Summary:
- Clarify that `torch.distributed.autograd.backward()` does not use the current thread-local autograd context; instead, it looks the context up based on the `context_id` passed in.
- Clarify the same for `torch.distributed.optim.DistributedOptimizer.step()`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/34670
Differential Revision: D20427645
Pulled By: rohan-varma
fbshipit-source-id: a1a88de346cdd4dbe65fb2b7627157f86fd2b6a3
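For context, a minimal sketch of the calling pattern this docs change describes, assuming RPC has already been initialized with `rpc.init_rpc` (setup omitted); the tensors, RRefs, and optimizer below are illustrative placeholders and are not part of the commit:

```python
import torch
import torch.distributed.autograd as dist_autograd
import torch.distributed.rpc as rpc
from torch import optim
from torch.distributed.optim import DistributedOptimizer

def run_trainer():
    # Parameters are referenced through RRefs; local RRefs are used here
    # for brevity, but they would normally point at remote workers.
    t1 = torch.rand((3, 3), requires_grad=True)
    t2 = torch.rand((3, 3), requires_grad=True)
    rref1 = rpc.RRef(t1)
    rref2 = rpc.RRef(t2)

    dist_optim = DistributedOptimizer(optim.SGD, [rref1, rref2], lr=0.05)

    with dist_autograd.context() as context_id:
        loss = (t1 + t2).sum()
        # backward() does NOT pick up a thread-local autograd context;
        # the context_id recorded by the `with` block must be passed in.
        dist_autograd.backward(context_id, [loss])
        # Likewise, step() takes the same context_id so it can locate the
        # gradients accumulated in that distributed autograd context.
        dist_optim.step(context_id)
```

Passing the `context_id` explicitly, rather than relying on thread-local state, is what lets `backward()` and `step()` find the gradients accumulated for that specific distributed autograd context.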