llama.cpp
11ac9800 - llama : improve infill support and special token detection (#9798)

Commit 11ac9800 · 261 days ago

llama : improve infill support and special token detection (#9798)

* llama : improve infill support (ggml-ci)
* llama : add more FIM token strings (ggml-ci)
* server : update prompt on slot restore (#9800)
* gguf : deprecate old FIM token KVs
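The commit's subject is fill-in-the-middle (FIM) special tokens: the prefix/suffix/middle markers an infill-capable model expects in its prompt. As a minimal sketch (the token strings shown are common CodeLlama/StarCoder-style conventions, not necessarily the exact strings this commit registers — a real model's FIM tokens come from its GGUF vocab metadata), a PSM-order infill prompt can be assembled like this:

```cpp
#include <string>

// Hedged sketch: build a fill-in-the-middle prompt in PSM
// (prefix-suffix-middle) order. The special-token strings below are
// illustrative conventions; the actual strings vary per model and are
// read from the vocab, which is what this commit's detection improves.
static std::string build_fim_prompt(const std::string & prefix,
                                    const std::string & suffix) {
    return "<|fim_prefix|>" + prefix
         + "<|fim_suffix|>" + suffix
         + "<|fim_middle|>";  // the model generates the "middle" after this
}
```

For example, completing the body of a function would pass the code before the cursor as `prefix` and the code after it as `suffix`, and the model's output is spliced in between.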
Changed files:
  • common/arg.cpp
  • common/common.cpp
  • common/common.h
  • examples/infill/infill.cpp
  • examples/server/README.md
  • examples/server/server.cpp
  • gguf-py/gguf/constants.py
  • gguf-py/gguf/gguf_writer.py
  • include/llama.h
  • src/llama-vocab.cpp
  • src/llama-vocab.h
  • src/llama.cpp