llama.cpp
014dca49 - CUDA: manage NCCL communicators in context (#21891)

Commit
9 days ago
CUDA: manage NCCL communicators in context (#21891)

* CUDA: manage NCCL communicators in context
* add check that all backends are CUDA
* remove unused vector, limit init to > 1 GPUs
* fix warnings
* fix cuda device, cache allreduce