llama.cpp
3fec68be - convert : add support of codeqwen due to tokenizer (#6707)

Committed 1 year ago
convert : add support of codeqwen due to tokenizer (#6707)

  • add support of codeqwen due to tokenizer
  • override load_hparams
  • fix typo
  • fix load_params
  • convert : fix whitespace

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
File changed:
  • convert-hf-to-gguf.py
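
The commit message alone does not show the mechanics, so below is a minimal, illustrative sketch of how a tokenizer-driven fallback for CodeQwen could be wired into convert-hf-to-gguf.py. The `Qwen2Model` class name, the `@Model.register` decorator, and the `_set_vocab_sentencepiece` / `_set_vocab_gpt2` helpers are assumptions about the script's internal conventions at the time; this is not a reproduction of the actual patch.

```python
# Illustrative sketch only - assumes it lives inside convert-hf-to-gguf.py,
# where the Model base class and its vocab helpers are already defined.
import gguf  # GGUF writer package shipped alongside llama.cpp


@Model.register("Qwen2ForCausalLM")
class Qwen2Model(Model):
    model_arch = gguf.MODEL_ARCH.QWEN2

    def set_vocab(self):
        # CodeQwen checkpoints ship a sentencepiece tokenizer, while other
        # Qwen2 checkpoints use a BPE (GPT-2 style) tokenizer, so try the
        # sentencepiece path first and fall back if its files are absent.
        try:
            self._set_vocab_sentencepiece()
        except FileNotFoundError:
            self._set_vocab_gpt2()
```

The design choice here is to let the tokenizer files present in the model directory select the vocabulary path at conversion time, rather than registering a separate model class for CodeQwen.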