DeepSpeed
[CPU] add fp16 support to shm inference_all_reduce (#5669, merged)

delock: add fp16 support to shm allreduce (commit 0aa626fa)
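For context, a minimal sketch of what this change enables, assuming the public deepspeed.comm.inference_all_reduce API on the CPU accelerator; the tensor size and launch details are illustrative and not taken from the PR diff:

```python
import torch
import deepspeed
import deepspeed.comm as dist

# Run under a multi-rank launcher (e.g. deepspeed or mpirun) on CPU ranks so
# the shared-memory (shm) all-reduce path is exercised.
deepspeed.init_distributed()
rank = dist.get_rank()

# fp16 is the dtype this PR adds to the shm inference_all_reduce path.
x = torch.full((4096,), float(rank + 1), dtype=torch.float16)
dist.inference_all_reduce(x)  # in-place sum across all ranks

if rank == 0:
    print(x[0].item())  # expect the sum of (rank + 1) over all ranks
```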
delock requested reviews from awan-10, mrwyattii, and arashb (1 year ago)
delock: fix format (commit cd89810d)
loadams: Merge branch 'master' into gma/fp16_allreduce_support (commit 94094678)
loadams requested reviews from tjruwase and tohtana (1 year ago)
delock: add more data types for test inference_all_reduce (commit 206e29a4)
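A hedged sketch of how the widened dtype coverage might be checked; the helper name, sizes, and tolerances below are illustrative and not the PR's actual test code:

```python
import torch
import deepspeed
import deepspeed.comm as dist

def check_inference_all_reduce(dtype, numel=1024):
    # Each rank contributes (rank + 1); the reduced result is the sum over ranks.
    rank = dist.get_rank()
    world_size = dist.get_world_size()

    x = torch.full((numel,), float(rank + 1), dtype=dtype)
    dist.inference_all_reduce(x)

    expected = float(sum(range(1, world_size + 1)))
    # Loosen tolerances for the reduced-precision dtypes.
    tol = 1e-6 if dtype == torch.float32 else 1e-2
    assert torch.allclose(x.float(), torch.full((numel,), expected), rtol=tol, atol=tol)

if __name__ == "__main__":
    deepspeed.init_distributed()
    for dtype in (torch.float32, torch.float16, torch.bfloat16):
        check_inference_all_reduce(dtype)
```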
delock requested a review from loadams (1 year ago)
adk9 approved these changes on 2024-06-18
delock: fix FP32 + world_size=1 bug (commit 44007e2d)
delock: remove unneeded code (commit 52084b07)
delock: fix format (commit 4719cba1)
delock: remove unnecessary comments (commit 9e08a420)
tjruwase: Merge branch 'master' into gma/fp16_allreduce_support (commit 7b62634b)
loadams: Merge branch 'master' into gma/fp16_allreduce_support (commit e077d572)
loadams: Merge branch 'master' into gma/fp16_allreduce_support (commit 812579f9)
loadams enabled auto-merge (1 year ago)
auto-merge disabled (manually disabled by user, 1 year ago)
loadams: Merge branch 'master' into gma/fp16_allreduce_support (commit ae7497a9)
loadams enabled auto-merge (1 year ago)
loadams merged commit 19da95f7 into master (1 year ago)
