llama.cpp
d62b532c
- Use model->gguf_kv for loading the template instead of using the C API. (#10868)
Commit
268 days ago
Use model->gguf_kv for loading the template instead of using the C API. (#10868)

* Bump model_template to 16384 bytes to support larger chat templates.
* Use `model->gguf_kv` for efficiency.
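The change swaps a C-API metadata lookup through a fixed-size buffer for a direct read of the model's GGUF key/value map, so the template length is no longer capped by the buffer. A minimal sketch of the idea, assuming `gguf_kv` is a string-to-string map of GGUF metadata; the struct and function names below are illustrative, not the exact llama.cpp source:

```cpp
#include <string>
#include <unordered_map>

// Illustrative stand-in for the model struct; in llama.cpp the loader
// populates a gguf_kv map with the model file's metadata.
struct llama_model_sketch {
    std::unordered_map<std::string, std::string> gguf_kv;
};

// Before this commit, the template was fetched via the C API into a
// fixed buffer, which is why the commit also bumps model_template to
// 16384 bytes:
//
//   char model_template[16384];
//   llama_model_meta_val_str(model, "tokenizer.chat_template",
//                            model_template, sizeof(model_template));
//
// After: read the metadata map directly, with no fixed-size limit and
// no extra copy across the C API boundary.
std::string get_chat_template(const llama_model_sketch * model) {
    auto it = model->gguf_kv.find("tokenizer.chat_template");
    return it != model->gguf_kv.end() ? it->second : "";
}
```

Reading the map directly also removes the need to guess a worst-case buffer size up front; any template length stored in the GGUF file works.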
References
#10868 - Use model->gguf_kv for loading the template instead of using the C API.
Author
dranger003
Parents
081b29bd