llama.cpp — PR #3480 (merged)
server : fix incorrect num_tokens_predicted