DeepSpeed
[CPU] add fp16 support to shm inference_all_reduce
#5669
Merged
loadams merged 12 commits into deepspeedai:master from delock:gma/fp16_allreduce_support
add fp16 support to shm allreduce (0aa626fa)
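The commit above adds fp16 to the shared-memory (shm) all-reduce path used for CPU inference. A minimal Python sketch of the idea, assuming the usual layout where each rank writes its buffer into a shared segment and the reduction sums the per-rank slices; this is an illustration of the data flow, not DeepSpeed's C++ implementation, and `shm_allreduce_fp16` is a made-up name:

```python
import numpy as np
from multiprocessing import shared_memory

def shm_allreduce_fp16(rank_buffers):
    """Sum a list of fp16 arrays through one shared-memory segment (sketch)."""
    world_size = len(rank_buffers)
    numel = rank_buffers[0].size
    itemsize = np.dtype(np.float16).itemsize
    shm = shared_memory.SharedMemory(create=True,
                                     size=world_size * numel * itemsize)
    try:
        # one fp16 slice per rank, all living in the shared segment
        workspace = np.ndarray((world_size, numel), dtype=np.float16,
                               buffer=shm.buf)
        for rank, buf in enumerate(rank_buffers):
            workspace[rank, :] = buf  # each "rank" publishes its buffer
        # accumulate in fp32 to limit rounding error, then cast back to fp16
        result = workspace.astype(np.float32).sum(axis=0).astype(np.float16)
    finally:
        shm.close()
        shm.unlink()
    return result
```

Accumulating in a wider type before casting back is a common choice for fp16 reductions, since naive fp16 accumulation loses precision quickly as world size grows.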
delock requested a review from awan-10 (1 year ago)
delock requested a review from mrwyattii (1 year ago)
delock requested a review from arashb (1 year ago)
fix format (cd89810d)
Merge branch 'master' into gma/fp16_allreduce_support (94094678)
loadams requested a review from tjruwase (1 year ago)
loadams requested a review from tohtana (1 year ago)
add more data types for test inference_all_reduce (206e29a4)
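The commit above broadens the test's data-type coverage. A sketch of what such a parametrized check might look like, assuming a pure-NumPy reference reduction; `allreduce_ref` is a hypothetical helper for this illustration, whereas the real DeepSpeed test exercises the `inference_all_reduce` op itself:

```python
import numpy as np

def allreduce_ref(buffers):
    # reference reduction: accumulate in fp64, cast back to the input dtype
    acc = np.zeros(buffers[0].shape, dtype=np.float64)
    for b in buffers:
        acc += b.astype(np.float64)
    return acc.astype(buffers[0].dtype)

def test_allreduce_dtypes():
    # loop over the dtypes the op should now support (bf16 omitted:
    # NumPy has no native bfloat16)
    for dtype in (np.float16, np.float32, np.float64):
        bufs = [np.full(16, r + 1, dtype=dtype) for r in range(4)]
        out = allreduce_ref(bufs)
        assert out.dtype == dtype          # dtype is preserved
        assert np.allclose(out.astype(np.float64), 10.0)  # 1+2+3+4
```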
delock requested a review from loadams (1 year ago)
adk9 approved these changes on 2024-06-18
fix FP32+world_size=1 bug (44007e2d)
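The commit above fixes a corner case: with `world_size == 1` there are no peers, so the all-reduce is the identity and must not enter the multi-rank shared-memory path. A hedged sketch of that guard, using a hypothetical helper name rather than DeepSpeed's actual code:

```python
import numpy as np

def inference_all_reduce_sketch(buf, world_size, peers=()):
    """Toy all-reduce: sum `buf` with each peer buffer (sketch only)."""
    if world_size == 1:
        # single rank: nothing to reduce, return the input unchanged
        return buf
    acc = buf.astype(np.float32)  # widen before accumulating
    for p in peers:
        acc += p.astype(np.float32)
    return acc.astype(buf.dtype)
```

Handling the single-rank case up front also avoids touching shared memory at all when no communication is needed.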
remove unneeded code (52084b07)
fix format (4719cba1)
remove unnecessary comments (9e08a420)
Merge branch 'master' into gma/fp16_allreduce_support (7b62634b)
Merge branch 'master' into gma/fp16_allreduce_support (e077d572)
Merge branch 'master' into gma/fp16_allreduce_support (812579f9)
loadams enabled auto-merge (1 year ago)
auto-merge disabled (manually disabled by user, 1 year ago)
Merge branch 'master' into gma/fp16_allreduce_support (ae7497a9)
loadams enabled auto-merge (1 year ago)
loadams merged 19da95f7 into master (1 year ago)
Reviewers: adk9, awan-10, mrwyattii, arashb, tjruwase, tohtana, loadams
Assignees: none
Labels: none
Milestone: none