llama.cpp — PR #13911 (Merged)
model: minicpm should use llm_build_granite