llama.cpp
46876d2a - cuda : supports running on CPU for GGML_USE_CUBLAS=ON build (#3946)

* prototyping the idea that supports running on CPU for a GGML_USE_CUBLAS=on build
* doc: add comments to ggml_cublas_loaded()
* fix defined(...)