DeepSpeed
[CPU] Support SHM based inference_all_reduce in TorchBackend
#5391
Merged

Commits
  • support shm based allreduce when torchCCL is not installed
    delock committed 2 years ago
  • keep deepspeed.comm.inference_all_reduce interface not changed
    delock committed 2 years ago
  • Merge branch 'master' into gma/gloo_shm_allreduce
    loadams committed 2 years ago
  • fix formatting
    delock committed 2 years ago
  • Merge branch 'master' into gma/gloo_shm_allreduce
    tjruwase committed 2 years ago
  • Change 'SHM' in op builder name into 'ShareMem'
    delock committed 2 years ago
  • add op to inference_all_reduce
    delock committed 2 years ago
  • restore oneccl call parameter
    delock committed 2 years ago
  • Merge branch 'master' into gma/gloo_shm_allreduce
    tjruwase committed 2 years ago
  • Merge branch 'master' into gma/gloo_shm_allreduce
    loadams committed 2 years ago