llama.cpp
common: ensure token addition to batch does not exceed llama_batch size
#9668
Merged
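
The title describes adding a guard so that pushing a token into a `llama_batch` never writes past the arrays allocated by `llama_batch_init`. A minimal sketch of that kind of guard is below; it is not the PR's diff, and the helper name `batch_add_checked` and the explicit `capacity` parameter are illustrative assumptions (the actual helper in common/common.cpp detects the limit differently).

```cpp
// Sketch only: a bounds-checked token add in the spirit of the common batch helper.
// `capacity` is assumed to be the n_tokens value passed to llama_batch_init.
#include <cassert>
#include <vector>

#include "llama.h"

static void batch_add_checked(llama_batch & batch,
                              llama_token   id,
                              llama_pos     pos,
                              const std::vector<llama_seq_id> & seq_ids,
                              bool          logits,
                              int32_t       capacity) {
    // Refuse to grow the batch beyond the storage it was allocated with.
    assert(batch.n_tokens < capacity && "llama_batch size exceeded");

    batch.token   [batch.n_tokens] = id;
    batch.pos     [batch.n_tokens] = pos;
    batch.n_seq_id[batch.n_tokens] = (int32_t) seq_ids.size();
    for (size_t i = 0; i < seq_ids.size(); ++i) {
        batch.seq_id[batch.n_tokens][i] = seq_ids[i];
    }
    batch.logits  [batch.n_tokens] = logits;

    batch.n_tokens++;
}
```

Without such a check, callers that loop over a prompt longer than the batch they allocated would silently write out of bounds; failing loudly at the add site makes the misuse visible immediately.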