llama.cpp
Commit b931f81b
server : adjust spec tests to generate up to 16 tokens (#19093)
References
#19093 - server : adjust spec tests to generate up to 16 tokens
Author
ggerganov
Parents
c5c64f72