llama.cpp
Support running on CPU in GGML_USE_CUBLAS=ON builds
#3946
Merged
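
The PR title describes a runtime CPU fallback for binaries compiled with cuBLAS support. A minimal sketch of that pattern, not the PR's actual diff: `use_gpu`, and the `n_gpu_layers` parameter mirroring llama.cpp's `-ngl` option, are hypothetical illustrations here.

```c
// Hedged illustration: a cuBLAS-enabled build (GGML_USE_CUBLAS defined)
// that still runs on CPU when no layers are offloaded. The helper name
// and parameter are placeholders, not identifiers from the PR.
#include <stdbool.h>

static bool use_gpu(int n_gpu_layers) {
#ifdef GGML_USE_CUBLAS
    // Even with cuBLAS compiled in, only engage CUDA when offload is requested.
    return n_gpu_layers > 0;
#else
    (void) n_gpu_layers;  // CPU-only build: the parameter is unused
    return false;
#endif
}
```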
