llama.cpp
a5e7dbd6 - llama : validate special token ids are in range when loading GGUF model (#3635)

Commit
1 year ago
llama : validate special token ids are in range when loading GGUF model (#3635)

* Add validation for special token ids to llama.cpp

  Small optimization for llama_byte_to_token SPM mode

* Fix BPE newline check, only I could break something so simple

* Killll meeeeee

* Account for GGUF_KEY_KEY only setting when the key exists

* Minor code cleanups.

* Fix convert.py error msg when added tokens are out of range

* Make gguf SpecialVocab vocab size-aware

  Update conversion scripts accordingly

* Avoid a string copy

  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
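The core idea of the commit is a range check: a special token id read from GGUF metadata (BOS, EOS, etc.) is only usable if it falls within `[0, vocab_size)`. A minimal Python sketch of that check follows; the function name and dict shape are hypothetical, not the actual llama.cpp or gguf-py API:

```python
# Hypothetical illustration of the validation this commit adds: drop any
# special token id that is missing or out of range for the vocabulary.

def validate_special_token_ids(special_ids: dict, vocab_size: int) -> dict:
    """Return only the special token ids in [0, vocab_size); warn on the rest."""
    valid = {}
    for name, token_id in special_ids.items():
        if token_id is None:
            continue  # key was absent from the GGUF metadata
        if 0 <= token_id < vocab_size:
            valid[name] = token_id
        else:
            print(f"warning: special token {name!r} has id {token_id}, "
                  f"out of range for vocab size {vocab_size}; ignoring")
    return valid


ids = validate_special_token_ids({"bos": 1, "eos": 2, "pad": 99999},
                                 vocab_size=32000)
# "pad" is dropped because 99999 >= 32000
```

Validating at load time turns a potential out-of-bounds vocabulary access later (e.g. when appending BOS during tokenization) into a loud warning at model-load time instead.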
Files changed:
  • convert-baichuan-hf-to-gguf.py
  • convert-bloom-hf-to-gguf.py
  • convert-falcon-hf-to-gguf.py
  • convert-gptneox-hf-to-gguf.py
  • convert-llama-ggml-to-gguf.py
  • convert-mpt-hf-to-gguf.py
  • convert-refact-hf-to-gguf.py
  • convert-starcoder-hf-to-gguf.py
  • convert.py
  • gguf-py/gguf/gguf.py
  • llama.cpp