llama.cpp
context : round n_tokens to next multiple of n_seqs when reserving
#14140
Merged