pytorch
282f4ab9 - Workaround for bug in DistributedDataParallel (#46186)

Workaround for bug in DistributedDataParallel (#46186)

Summary: Fix the DistributedDataParallelSingleProcessTest to work around a limitation in DistributedDataParallel where the batch_size needs to be evenly divisible by the number of GPUs used.
See https://github.com/pytorch/pytorch/issues/46175

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46186
Reviewed By: bdhirsh
Differential Revision: D24264664
Pulled By: mrshenli
fbshipit-source-id: 6cfd6d29e97f3e3420391d03b7f1a8ad49d75f48
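The sketch below illustrates the kind of workaround the commit message describes: picking a batch size that is evenly divisible by the number of GPUs so DistributedDataParallel can split each input batch evenly across replicas. It is not the actual test change; the helper divisible_batch_size and the concrete sizes are hypothetical and chosen only for illustration.

# Minimal sketch of the workaround (assumed helper, not the real test code):
# choose a batch size that divides evenly by the number of GPUs so DDP can
# scatter the batch across devices without a remainder.
import torch

def divisible_batch_size(requested: int, num_gpus: int) -> int:
    """Round the requested batch size up to the nearest multiple of num_gpus."""
    if num_gpus <= 1:
        return requested
    remainder = requested % num_gpus
    return requested if remainder == 0 else requested + (num_gpus - remainder)

num_gpus = max(torch.cuda.device_count(), 1)
batch_size = divisible_batch_size(10, num_gpus)  # e.g. 10 -> 12 when num_gpus == 3
inputs = torch.randn(batch_size, 8)  # this batch now splits evenly across the GPUs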