llama.cpp
server : enable continuous batching by default #6231
Merged
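
This change flips the server's continuous batching default from off to on, so new requests can be inserted into an in-flight batch as slots free up instead of waiting for the current batch to drain. A minimal sketch of the kind of one-line default flip such a change involves, assuming the `cont_batching` field of llama.cpp's common parameter struct and the `-cb, --cont-batching` CLI flag from that era; the field name and comment are not copied from this PR's actual diff:

```diff
 // sketch of the expected change to llama.cpp's common parameter defaults;
 // field name assumed, not verified against this PR's diff
-    bool cont_batching = false; // continuous batching (opt in with -cb)
+    bool cont_batching = true;  // continuous batching on by default
```

With this default, server users would no longer need to pass `-cb` / `--cont-batching` explicitly to get continuous batching.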