[CPU] Support SHM based inference_all_reduce in TorchBackend #5391
- support shm based allreduce when torchCCL is not installed (752ea06f)
- keep deepspeed.comm.inference_all_reduce interface not changed (9f2dd131)
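The two commits above add a shared-memory (SHM) all-reduce path for CPU inference when torchCCL is not installed, while keeping the `deepspeed.comm.inference_all_reduce` call signature unchanged. Below is a minimal sketch of the general SHM all-reduce pattern only, not DeepSpeed's actual C++ implementation: ranks are modeled as threads, the shared-memory region as a plain Python list, and tensors as lists of floats.

```python
# Sketch of a shared-memory all-reduce: each "rank" publishes its data
# into a shared workspace, waits at a barrier, then reduces locally.
# This avoids a network collective entirely when all ranks share memory.
import threading

WORLD_SIZE = 4
N = 8  # elements per tensor

workspace = [[0.0] * N for _ in range(WORLD_SIZE)]  # stand-in for the SHM region
barrier = threading.Barrier(WORLD_SIZE)
results = [None] * WORLD_SIZE

def inference_all_reduce(rank, tensor):
    workspace[rank][:] = tensor  # 1. publish local data to "shared memory"
    barrier.wait()               # 2. wait until every rank has published
    reduced = [sum(col) for col in zip(*workspace)]  # 3. each rank reduces locally
    barrier.wait()               # 4. keep workspace valid until all ranks have read it
    results[rank] = reduced

threads = [
    threading.Thread(target=inference_all_reduce, args=(r, [float(r + 1)] * N))
    for r in range(WORLD_SIZE)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every rank ends up with the same elementwise sum: 1 + 2 + 3 + 4 = 10.0
print(results[0])  # [10.0, 10.0, ..., 10.0]
```

The second barrier matters in a real implementation: without it, a fast rank could start the next collective and overwrite the workspace before a slow rank has finished reading it.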
delock changed the title from "Support SHM based inference_all_reduce in TorchBackend" to "[CPU] Support SHM based inference_all_reduce in TorchBackend" 2 years ago
- Merge branch 'master' into gma/gloo_shm_allreduce (dd8f9b0b)
- fix formatting (1cd87c3a)
- Merge branch 'master' into gma/gloo_shm_allreduce (cbecb8df)
- Change 'SHM' in op builder name into 'ShareMem' (517b7cb7)
- add op to inference_all_reduce (516bc358)
- restore oneccl call parameter (541e79c4)
tjruwase approved these changes on 2024-04-15
- Merge branch 'master' into gma/gloo_shm_allreduce (71e74804)
- Merge branch 'master' into gma/gloo_shm_allreduce (4149ce01)
tjruwase merged b22706a7 into master 2 years ago