pytorch
02548800 - NCCL process group: avoid workEnqueue when capturing cuda graph (#103503)

Summary: In torch.distributed, make ProcessGroupNCCL skip workEnqueue when the CUDA stream is capturing; that is, while a CUDA graph is being captured, nothing is enqueued for the watchdog thread to consider. This allows NCCL operations to be captured in a CUDA graph.

This is a follow-up to an internal discussion [1] in which the watchdog thread was observed to crash when using CUDA graphs containing an all_reduce. The watchdog thread wants to query events pertaining to enqueued work items, but this cannot be done for "events" created during CUDA graph capture.

[1] https://fb.workplace.com/groups/1405155842844877/posts/6975201909173548/

This is another attempt at https://github.com/pytorch/pytorch/pull/102542 / D46274814, fixing the test failures.

Test Plan: The repro mentioned in https://fb.workplace.com/groups/1405155842844877/posts/7003002339726838/ runs successfully after this change.

Differential Revision: D46683554

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103503
Approved by: https://github.com/kwen2501
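For context, a minimal usage sketch of what this change enables: capturing a `dist.all_reduce` inside a `torch.cuda.graph` capture region. This example is illustrative, not taken from the commit; it assumes a single-node job launched with torchrun and warms up the process group on a side stream so the NCCL communicator exists before capture begins.

```python
# Illustrative sketch (not from the commit): capturing an NCCL all_reduce in a
# CUDA graph, which works once ProcessGroupNCCL stops handing work to the
# watchdog thread during capture. Assumes a single-node job launched via torchrun.
import os
import torch
import torch.distributed as dist

dist.init_process_group("nccl")
torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

x = torch.ones(1024, device="cuda")

# Warm up on a side stream so the NCCL communicator is created before capture.
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s):
    dist.all_reduce(x)
torch.cuda.current_stream().wait_stream(s)

# Capture the collective; during capture, no work item is enqueued for the
# watchdog thread to poll.
g = torch.cuda.CUDAGraph()
with torch.cuda.graph(g):
    dist.all_reduce(x)

# Replaying the graph re-issues the captured NCCL kernel.
g.replay()
torch.cuda.synchronize()
dist.destroy_process_group()
```

Run with, for example, `torchrun --nproc_per_node=2 repro.py`.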