llama.cpp
68e210b3
- server : enable continuous batching by default (#6231)
Commit · 1 year ago
References
#6231 - server : enable continuous batching by default
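For context, continuous batching lets the server admit new requests into the running decode batch as soon as a slot frees up, instead of waiting for the whole batch to drain. The toy scheduler below illustrates the idea only; it is not llama.cpp's implementation, and the request/slot representation is invented for the sketch:

```python
from collections import deque

def continuous_batching(requests, max_slots):
    """Toy scheduler: each request needs `steps` decode iterations.
    New requests join the batch the moment a slot frees, rather than
    waiting for the whole batch to finish (static batching)."""
    pending = deque(requests)   # (id, steps) tuples, hypothetical format
    slots = {}                  # slot index -> [id, steps_left]
    timeline = []               # which ids ran at each decode step
    while pending or slots:
        # Admit new work into free slots (the "continuous" part).
        for s in range(max_slots):
            if s not in slots and pending:
                rid, steps = pending.popleft()
                slots[s] = [rid, steps]
        # One decode step over the current batch.
        timeline.append(sorted(entry[0] for entry in slots.values()))
        for s in list(slots):
            slots[s][1] -= 1
            if slots[s][1] == 0:
                del slots[s]    # slot freed mid-run
    return timeline

# With 2 slots, request "c" starts as soon as "a" finishes,
# without waiting for "b" to drain:
print(continuous_batching([("a", 1), ("b", 3), ("c", 2)], max_slots=2))
# → [['a', 'b'], ['b', 'c'], ['b', 'c']]
```

Under static batching the same workload would run "a" and "b" to completion before "c" starts; here "c" reuses "a"'s slot immediately, which is the throughput win this commit turns on by default.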
Author
ggerganov
Parents
b3e94f26