llama.cpp
c556418b - llama-bench : use local GPUs along with RPC servers (#14917)

Currently, if RPC servers are specified with '--rpc' and a local GPU (e.g. CUDA) is available, the benchmark runs only on the RPC device(s), yet the backend result column reports "CUDA,RPC", which is incorrect. This patch adds all local GPU devices to the run, making llama-bench consistent with llama-cli.
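A sketch of the kind of invocation the commit message describes; the model path and RPC endpoint here are illustrative placeholders, not values from the commit:

```shell
# Benchmark with both the local GPU and a remote RPC server.
# Before this patch, only the RPC device(s) were actually benchmarked,
# even though the backend column reported "CUDA,RPC".
# Host/port and model path below are hypothetical examples.
./llama-bench -m models/llama-7b.gguf --rpc 192.168.1.42:50052
```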