llama.cpp
llama-bench : use two tokens in the warmup run for prompt evals
#3059
Merged
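
The change named in the title is small but targeted: llama-bench performs a warmup pass before the timed runs, and warming up prompt evaluation with two tokens (rather than one) presumably ensures the batched prompt-processing path is exercised before measurement begins, not just the single-token path used for generation. Below is a minimal, self-contained sketch of the idea only; it is not the actual llama-bench diff, and `evaluate_tokens` is a hypothetical stand-in for the real model evaluation call.

```cpp
// Illustrative sketch only -- not the actual llama-bench code.
#include <cstdio>
#include <vector>

using token = int;

// Hypothetical helper: evaluates `tokens` as a single batch against the model.
void evaluate_tokens(const std::vector<token> & tokens) {
    std::printf("evaluated batch of %zu token(s)\n", tokens.size());
}

// Warmup before the timed benchmark runs.
static void warmup(int n_prompt, int n_gen) {
    if (n_prompt > 0) {
        // Use two tokens so the batched (prompt-eval) code path is warmed up,
        // not only the single-token path.
        evaluate_tokens({0, 0});
    }
    if (n_gen > 0) {
        // Generation is warmed up with a single token, as in a normal decode step.
        evaluate_tokens({0});
    }
}

int main() {
    warmup(/*n_prompt=*/512, /*n_gen=*/128);
    return 0;
}
```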