llama.cpp
8185710a - CUDA: use only 1 thread if fully offloaded (#2915)

Committed 1 year ago