llama.cpp
11ac9800
- llama : improve infill support and special token detection (#9798)
Commit
261 days ago
llama : improve infill support and special token detection (#9798)

* llama : improve infill support

ggml-ci

* llama : add more FIM token strings

ggml-ci

* server : update prompt on slot restore (#9800)

* gguf : deprecate old FIM token KVs
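Infill (fill-in-the-middle) works by wrapping the code before and after the cursor in model-specific FIM sentinel tokens; this commit extends the set of recognized FIM token strings. A minimal sketch of how such a prompt is assembled, assuming illustrative `<|fim_prefix|>`-style sentinels (the actual token strings vary by model and are detected from the vocabulary):

```python
def build_fim_prompt(prefix: str, suffix: str,
                     tok_pre: str = "<|fim_prefix|>",
                     tok_suf: str = "<|fim_suffix|>",
                     tok_mid: str = "<|fim_middle|>") -> str:
    """Assemble a fill-in-the-middle prompt in prefix-suffix-middle
    (PSM) order: the model generates the missing span after the
    middle sentinel. Sentinel strings here are illustrative."""
    return f"{tok_pre}{prefix}{tok_suf}{suffix}{tok_mid}"


# Complete the body of a function given the code around the gap.
prompt = build_fim_prompt("def add(a, b):\n    return ", "\n")
```

The sentinel defaults are only placeholders: in llama.cpp the correct FIM tokens are looked up from the loaded model's vocabulary rather than hard-coded.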
References
#9798 - llama : improve infill support and special token detection
Author
ggerganov
Parents
943d20b4
Files (12)
common/arg.cpp
common/common.cpp
common/common.h
examples/infill/infill.cpp
examples/server/README.md
examples/server/server.cpp
gguf-py/gguf/constants.py
gguf-py/gguf/gguf_writer.py
include/llama.h
src/llama-vocab.cpp
src/llama-vocab.h
src/llama.cpp