llama.cpp
ggml : do not define GGML_USE_CUDA when building with GGML_BACKEND_DL
#11211
Merged
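
For context, a minimal sketch of why the two build options conflict: code guarded by the compile-time `GGML_USE_CUDA` macro assumes the CUDA backend is statically linked in, while `GGML_BACKEND_DL` builds load backends as shared libraries at runtime through the backend registry (e.g. via `ggml_backend_load_all()`), so the macro should not be defined in that configuration. The snippet below is purely illustrative and is not the PR's actual diff.

```cpp
// Illustrative sketch (not the PR's diff): with GGML_BACKEND_DL the CUDA
// backend is a runtime-loaded module, so a compile-time GGML_USE_CUDA guard
// would wrongly assume a statically linked backend.
#include "ggml-backend.h"

#include <cstdio>

int main() {
#ifdef GGML_USE_CUDA
    // Static-link path: only meaningful when the CUDA backend is compiled in
    // and linked directly into the binary.
#endif

    // Dynamic-load path used with GGML_BACKEND_DL: discover backends at runtime.
    ggml_backend_load_all();  // load available ggml backend shared libraries

    // Enumerate whatever devices the loaded backends expose.
    for (size_t i = 0; i < ggml_backend_dev_count(); ++i) {
        ggml_backend_dev_t dev = ggml_backend_dev_get(i);
        printf("device: %s\n", ggml_backend_dev_name(dev));
    }
    return 0;
}
```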