llama.cpp
Commit 014dca49: CUDA: manage NCCL communicators in context (#21891)
9 days ago
CUDA: manage NCCL communicators in context (#21891)
* CUDA: manage NCCL communicators in context
* add check that all backends are CUDA
* remove unused vector, limit init to > 1 GPUs
* fix warnings
* fix cuda device, cache allreduce
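The commit message describes moving NCCL communicator ownership into the backend context, initializing only when more than one GPU is in use, and caching the communicators for allreduce. A minimal sketch of that pattern is below; the `cuda_ctx` struct and the `ctx_*` function names are hypothetical illustrations, not the actual llama.cpp symbols, while the NCCL calls (`ncclCommInitAll`, `ncclGroupStart`/`ncclGroupEnd`, `ncclAllReduce`, `ncclCommDestroy`) are the real library API.

```cpp
// Sketch only: communicators live in the context and are created lazily,
// only for multi-GPU runs, then reused for every allreduce.
#include <nccl.h>
#include <cuda_runtime.h>
#include <vector>

struct cuda_ctx {
    std::vector<ncclComm_t> comms;   // one communicator per GPU, cached here
    bool nccl_initialized = false;
};

// Initialize communicators once; skip entirely for single-GPU setups.
static bool ctx_init_nccl(cuda_ctx & ctx, const std::vector<int> & devices) {
    if (ctx.nccl_initialized || devices.size() <= 1) {
        return ctx.nccl_initialized;
    }
    ctx.comms.resize(devices.size());
    if (ncclCommInitAll(ctx.comms.data(), (int) devices.size(), devices.data()) != ncclSuccess) {
        ctx.comms.clear();
        return false;
    }
    ctx.nccl_initialized = true;
    return true;
}

// All-reduce a float buffer across GPUs using the cached communicators.
static void ctx_allreduce(cuda_ctx & ctx, float ** bufs, size_t count,
                          const std::vector<int> & devices, cudaStream_t * streams) {
    ncclGroupStart();
    for (size_t i = 0; i < devices.size(); ++i) {
        cudaSetDevice(devices[i]);   // select the device before its collective
        ncclAllReduce(bufs[i], bufs[i], count, ncclFloat, ncclSum,
                      ctx.comms[i], streams[i]);
    }
    ncclGroupEnd();
}

// Communicators are destroyed with the context, not per call.
static void ctx_free_nccl(cuda_ctx & ctx) {
    for (ncclComm_t c : ctx.comms) {
        ncclCommDestroy(c);
    }
    ctx.comms.clear();
    ctx.nccl_initialized = false;
}
```

Caching the communicators in the context avoids the cost of re-creating them on every graph evaluation; tying their lifetime to the context also gives a single, well-defined destruction point.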
References
#21891 - CUDA: manage NCCL communicators in context
Author
JohannesGaessler
Parents
adb541a6