llama.cpp
cmake: don't fail on `GGML_CPU=OFF`
#11457
Merged