llama.cpp
777f42ba - Improve handling of special tokens in GGML to GGUF converter (#2725)

Committed 2 years ago
Improve handling of special tokens in GGML to GGUF converter (#2725)

* Improve UNK, BOS, EOS token handling when converting without metadata.
* Allow importing as a module.
* Remove some obsolete code and minor cleanups.
* Set default UNK token mapping from -1 to 0 in llama.cpp.
* Try to handle overflow due to buggy Windows Python with a better error message.
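Two of the bullet points above lend themselves to a short illustration. The sketch below is hypothetical (the function names are not the converter's real API): it shows the idea of defaulting a missing special token id from -1 to 0, and of wrapping a `struct` pack so an overflow from a buggy Python build surfaces as a clearer error message.

```python
import struct


def resolve_special_token(token_id, default_id=0):
    """Map an unset special token id (-1 or None) to a safe default.

    Illustrates the "default UNK token mapping from -1 to 0" change;
    the name and signature are assumptions, not the real converter code.
    """
    if token_id is None or token_id < 0:
        return default_id
    return token_id


def pack_u32(value):
    """Pack a value as little-endian uint32 with a clearer error on overflow.

    Illustrates the "handle overflow ... with a better error message" change;
    a buggy Python build could feed an out-of-range value to struct.pack here.
    """
    try:
        return struct.pack('<I', value)
    except struct.error as exc:
        raise ValueError(
            f"Value {value} does not fit in an unsigned 32-bit field; "
            "the input file may be corrupt or the Python build buggy"
        ) from exc
```

A valid token id passes through unchanged, while -1 falls back to 0; `pack_u32` behaves like `struct.pack('<I', ...)` for in-range values and raises `ValueError` with a descriptive message otherwise.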