39590d06 - Make new_subgroups available for non-CUDA-dependent backends (#99706)

`new_subgroups` makes it easy to create sub-communication groups, but it currently requires CUDA to be available. It should also be usable for communication that does not rely on CUDA, such as the CPU-based gloo backend or custom communication backends. For example, with gloo (the same applies to a custom backend):

```python
import os

import torch
import torch.distributed as dist
import torch.multiprocessing as mp


def gloo_process(rank_id, world_size, group_size, mp_lock):
    assert not torch.cuda.is_available()

    def lock_print(*args, **kwargs):
        with mp_lock:
            print(*args, **kwargs, flush=True)

    os.environ['MASTER_ADDR'] = '127.0.0.1'
    os.environ['MASTER_PORT'] = '29500'
    dist.init_process_group('gloo', rank=rank_id, world_size=world_size)

    # Split the world into subgroups of `group_size` ranks; no CUDA required.
    subgroup, _ = dist.new_subgroups(group_size)
    subgroup_ranks = list(range(subgroup.rank() * group_size, (subgroup.rank() + 1) * group_size))
    lock_print(f"Rank {rank_id} initialized in subgroup_{subgroup.rank()}: {subgroup_ranks}")

    # Broadcast from the first rank of each subgroup; wait for the collective
    # to complete before reading the tensor.
    tensor = torch.Tensor([rank_id + 1])
    subgroup.broadcast(tensor, root=0).wait()
    lock_print(f"After broadcast, rank {rank_id} in subgroup_{subgroup.rank()}:{subgroup_ranks} got {tensor}")


if __name__ == "__main__":
    world_size = 4
    group_size = 2
    processes = []
    mp.set_start_method("spawn")
    mp_lock = mp.Lock()
    for rank in range(world_size):
        p = mp.Process(target=gloo_process, args=(rank, world_size, group_size, mp_lock))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()
```

```bash
Rank 0 assigned to subgroup_0: [0, 1]
Rank 1 assigned to subgroup_1: [2, 3]
Rank 2 assigned to subgroup_0: [0, 1]
Rank 3 assigned to subgroup_1: [2, 3]
After broadcast, rank 2 in subgroup_0:[0, 1] got tensor([3.])
After broadcast, rank 3 in subgroup_1:[2, 3] got tensor([3.])
After broadcast, rank 1 in subgroup_1:[2, 3] got tensor([1.])
After broadcast, rank 0 in subgroup_0:[0, 1] got tensor([1.])
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99706
Approved by: https://github.com/kumpera
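
The example above calls the collective directly on the returned `ProcessGroup` object. The subgroup handle can also be passed to the regular functional collectives via the `group=` argument, which is the more common idiom. Below is a minimal sketch under the same assumptions (four CPU-only gloo ranks on one machine); the `worker` function name, the rendezvous address/port, and the per-rank values are illustrative, not part of the commit.

```python
import os

import torch
import torch.distributed as dist
import torch.multiprocessing as mp


def worker(rank_id, world_size, group_size):
    # Same CPU-only gloo rendezvous as in the commit example (illustrative address/port).
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29501"
    dist.init_process_group("gloo", rank=rank_id, world_size=world_size)

    # new_subgroups() splits the world into contiguous groups of `group_size` ranks.
    subgroup, _ = dist.new_subgroups(group_size)

    # Use the subgroup with the standard functional API via `group=`.
    t = torch.tensor([float(rank_id + 1)])
    dist.all_reduce(t, op=dist.ReduceOp.SUM, group=subgroup)
    print(f"rank {rank_id}: sum over its subgroup = {t.item()}", flush=True)

    dist.destroy_process_group()


if __name__ == "__main__":
    world_size, group_size = 4, 2
    mp.set_start_method("spawn")
    procs = [mp.Process(target=worker, args=(r, world_size, group_size)) for r in range(world_size)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```

With `world_size=4` and `group_size=2`, ranks 0–1 form one subgroup and ranks 2–3 another, so each rank should see the sum of the values contributed within its own subgroup.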