llama.cpp
978ba3d8
- Server: Don't ignore llama.cpp params (#8754)
Commit
321 days ago
Server: Don't ignore llama.cpp params (#8754)

* Don't ignore llama.cpp params
* Add fallback for max_tokens
References
#8754 - Server: Don't ignore llama.cpp params
Author
ardfork
Parents
ecf6b7f2
Files
2
examples/server
server.cpp
utils.hpp