llama.cpp
Use model->gguf_kv for loading the template instead of using the C API.
#10868
Merged
slaren merged 2 commits into ggml-org:master from dranger003:master
dranger003 committed 919fe432: Bump model_template to 16384 bytes to support larger chat templates.
dranger003 force-pushed from 4a7f1f79 to 919fe432 (269 days ago)
dranger003 committed 52bfa235: Use `model->gguf_kv` for efficiency.
dranger003 changed the title from "Bump model_template to 16384 bytes to support larger chat templates." to "Use model->gguf_kv for loading the template instead of using the C API." (269 days ago)
slaren approved these changes on 2024-12-17
slaren merged d62b532c into master (268 days ago)
