llama.cpp
fix(gguf-py): special tokens are no longer skipped when add_<token>_token is set to false
#5487
Merged
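
The PR addresses a bug in gguf-py's special-token handling: when a model's config set a flag like `add_bos_token` to false, the converter skipped the token entirely, so its ID was never written to the GGUF metadata, even though consumers still need to know which token is BOS/EOS. The fix records the token ID and the add flag independently. Below is a minimal self-contained sketch of that change, not gguf-py's actual API; the `load_special_tokens` helper and the `config` layout are illustrative assumptions.

```python
from typing import Any


def load_special_tokens(config: dict[str, Any]) -> tuple[dict[str, int], dict[str, bool]]:
    """Collect special token IDs and their add_<token>_token flags.

    Illustrative stand-in for gguf-py's special-vocab loading. Before the
    fix, a token whose add_<token>_token flag was false was skipped
    entirely, so its ID never reached the GGUF metadata. After the fix,
    the ID and the flag are recorded independently.
    """
    token_ids: dict[str, int] = {}
    add_flags: dict[str, bool] = {}

    for typ in ("bos", "eos", "unk", "pad"):
        add_key = f"add_{typ}_token"
        id_key = f"{typ}_token_id"

        if add_key in config:
            add_flags[typ] = bool(config[add_key])

        # Buggy pre-fix behavior (kept here as a comment):
        #   if not add_flags.get(typ, True):
        #       continue   # token ID silently dropped
        # Fixed behavior: always record the ID when present, so downstream
        # consumers still know which token is BOS/EOS even when it should
        # not be added automatically during tokenization.
        if config.get(id_key) is not None:
            token_ids[typ] = int(config[id_key])

    return token_ids, add_flags


if __name__ == "__main__":
    cfg = {"bos_token_id": 1, "eos_token_id": 2, "add_bos_token": False}
    ids, flags = load_special_tokens(cfg)
    print(ids)    # {'bos': 1, 'eos': 2} — BOS ID kept despite the false flag
    print(flags)  # {'bos': False}
```

Decoupling the two pieces of metadata matches how they are consumed: the token ID identifies the special token, while the add flag only controls whether it is inserted automatically at encode time.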
