llama.cpp
server : avoid breaking KV cache when prompt >= n_ctx (#6855)
#8359
Closed