DeepSpeed
[CPU] Support SHM based inference_all_reduce in TorchBackend
#5391
Merged

delock support shm based allreduce when torchCCL is not installed
752ea06f
delock keep deepspeed.comm.inference_all_reduce interface not changed
9f2dd131
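The two commits above describe the PR's core design: when torchCCL is not installed, `inference_all_reduce` falls back to a shared-memory (SHM) all-reduce kernel, while the public `deepspeed.comm.inference_all_reduce` interface stays unchanged so callers never see which path ran. A minimal sketch of that fallback-dispatch pattern, using purely illustrative names (this is not DeepSpeed's actual implementation):

```python
class TorchBackendSketch:
    """Illustrative backend: prefer an SHM all-reduce op, else fall back.

    `shm_op` stands in for the compiled shared-memory op builder; passing
    None models the case where the SHM kernel is unavailable.
    """

    def __init__(self, shm_op=None):
        self.shm_op = shm_op

    def _fallback_all_reduce(self, tensor):
        # Stand-in for the generic backend all_reduce (e.g. via gloo);
        # on a single rank an all-reduce is the identity.
        return tensor

    def inference_all_reduce(self, tensor):
        # Public interface unchanged: dispatch happens inside the backend.
        if self.shm_op is not None:
            return self.shm_op(tensor)
        return self._fallback_all_reduce(tensor)


# SHM kernel absent: falls back transparently.
backend = TorchBackendSketch(shm_op=None)
print(backend.inference_all_reduce([1.0, 2.0]))   # [1.0, 2.0]

# SHM kernel present (toy op pretending two identical ranks contributed).
backend = TorchBackendSketch(shm_op=lambda t: [x * 2 for x in t])
print(backend.inference_all_reduce([1.0, 2.0]))   # [2.0, 4.0]
```

The point of the pattern is that the dispatch decision lives entirely inside the backend, which is why the commits emphasize that the `inference_all_reduce` signature did not change.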
delock requested a review from awan-10 2 years ago
delock requested a review from mrwyattii 2 years ago
delock requested a review from arashb 2 years ago
delock changed the title from "Support SHM based inference_all_reduce in TorchBackend" to "[CPU] Support SHM based inference_all_reduce in TorchBackend" 2 years ago
loadams Merge branch 'master' into gma/gloo_shm_allreduce
dd8f9b0b
delock fix formatting
1cd87c3a
tjruwase Merge branch 'master' into gma/gloo_shm_allreduce
cbecb8df
tjruwase commented on 2024-04-12
delock Change 'SHM' in op builder name into 'ShareMem'
517b7cb7
delock add op to inference_all_reduce
516bc358
delock restore oneccl call parameter
541e79c4
tjruwase approved these changes on 2024-04-15
tjruwase Merge branch 'master' into gma/gloo_shm_allreduce
71e74804
loadams Merge branch 'master' into gma/gloo_shm_allreduce
4149ce01
tjruwase merged b22706a7 into master 2 years ago
