Enhance the documentation for DistributedDataParallel from torch.nn.parallel.distributed (#16010)
Summary:
- fixed a typo
- made the docs consistent with #5108
One more change may also be needed. The current docs say:
> The batch size should be larger than the number of GPUs used **locally**.
But shouldn't the batch size be larger than the number of GPUs used **either locally or remotely**? Unfortunately, I couldn't test this myself with my single GPU.
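For context, here is a minimal sketch of the single-process multi-GPU mode the quoted sentence describes, where `DistributedDataParallel` scatters each input batch across the locally attached GPUs. The backend, address, port, and device IDs are placeholder assumptions, and running it would require at least two local GPUs:

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel

# Placeholder rendezvous settings for a single-process example.
dist.init_process_group(backend="nccl",
                        init_method="tcp://127.0.0.1:23456",
                        rank=0, world_size=1)

# One process drives two local GPUs; in this mode DistributedDataParallel
# scatters each input batch across device_ids, so each local GPU needs
# at least one sample per forward pass.
model = torch.nn.Linear(10, 10).to("cuda:0")
ddp_model = DistributedDataParallel(model, device_ids=[0, 1],
                                    output_device=0)

# A batch of 2 gives one sample per local GPU; a smaller batch would
# leave one replica without input.
inputs = torch.randn(2, 10, device="cuda:0")
outputs = ddp_model(inputs)
```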
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16010
Differential Revision: D13709516
Pulled By: ezyang
fbshipit-source-id: e44459a602a8a834fd365fe46e4063e9e045d5ce