llama.cpp
11ac9800
- llama : improve infill support and special token detection (#9798)
Commit
341 days ago
llama : improve infill support and special token detection (#9798)

* llama : improve infill support (ggml-ci)
* llama : add more FIM token strings (ggml-ci)
* server : update prompt on slot restore (#9800)
* gguf : deprecate old FIM token KVs
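For context, the FIM (fill-in-the-middle) special tokens this commit improves detection for are used to frame an infill prompt around a gap in existing text. Below is a minimal sketch of the common prefix/suffix/middle layout; the literal token strings (`<|fim_prefix|>`, `<|fim_suffix|>`, `<|fim_middle|>`) are model-specific assumptions for illustration only, and in practice the actual strings are taken from the model's GGUF vocabulary metadata.

```cpp
// Sketch of how a fill-in-the-middle (infill) prompt is typically assembled.
// The token strings below are assumptions; real models define their own.
#include <iostream>
#include <string>

int main() {
    const std::string fim_pre = "<|fim_prefix|>";  // assumed prefix token string
    const std::string fim_suf = "<|fim_suffix|>";  // assumed suffix token string
    const std::string fim_mid = "<|fim_middle|>";  // assumed middle token string

    // Text surrounding the gap the model should fill in.
    const std::string prefix = "int add(int a, int b) {\n    return ";
    const std::string suffix = ";\n}\n";

    // Prefix-suffix-middle ordering: the model generates the missing middle
    // segment after seeing the middle token.
    const std::string prompt = fim_pre + prefix + fim_suf + suffix + fim_mid;
    std::cout << prompt << std::endl;
    return 0;
}
```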
References
#9798 - llama : improve infill support and special token detection
Author
ggerganov
Parents
943d20b4