llama.cpp
support loading vocab from fast tokenizer config in convert.py
#3633
Merged
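
The PR title refers to reading the vocabulary out of a HuggingFace *fast* tokenizer config (`tokenizer.json`). A minimal sketch of that idea, assuming the standard fast-tokenizer layout where the token-to-id map lives under `model.vocab` (the helper name here is hypothetical, not convert.py's actual code):

```python
import json

def load_vocab_from_tokenizer_json(path):
    # Parse the fast-tokenizer config; HF fast tokenizers store the
    # vocabulary as a token -> id mapping under model.vocab.
    with open(path, encoding="utf-8") as f:
        config = json.load(f)
    vocab = config["model"]["vocab"]
    # Return tokens ordered by id, the order a converter would emit them in.
    return [tok for tok, _ in sorted(vocab.items(), key=lambda kv: kv[1])]
```

Sorting by id matters because the JSON map carries no ordering guarantee, while a converted model file needs tokens laid out at their exact ids.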