llama.cpp
46876d2a
cuda : supports running on CPU for GGML_USE_CUBLAS=ON build (#3946)

* prototyping the idea that supports running on CPU for a GGML_USE_CUBLAS=on build
* doc: add comments to ggml_cublas_loaded()
* fix defined(...)
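The change lets a binary built with GGML_USE_CUBLAS=ON fall back to the CPU when no usable CUDA device is found at runtime, with ggml_cublas_loaded() exposing the result of that probe. Below is a minimal sketch of the pattern in C; only ggml_cublas_loaded() is named in the commit, while the flag g_cublas_loaded and the init function ggml_init_cublas() are assumed names for illustration.

    // Sketch of a runtime CUDA probe with CPU fallback.
    // ggml_cublas_loaded() comes from the commit message; the
    // other names here are assumptions, not the exact llama.cpp code.
    #include <cuda_runtime.h>
    #include <stdbool.h>

    static bool g_cublas_loaded = false; // assumed flag name

    void ggml_init_cublas(void) {       // assumed init entry point
        int device_count = 0;
        // Probe instead of aborting: a missing driver or device
        // leaves the flag false and the build keeps running on CPU.
        if (cudaGetDeviceCount(&device_count) == cudaSuccess && device_count > 0) {
            g_cublas_loaded = true;
        }
    }

    bool ggml_cublas_loaded(void) {
        return g_cublas_loaded;
    }

Callers would then guard GPU offload paths with a check like if (ggml_cublas_loaded()) { /* GPU path */ } else { /* CPU path */ }, instead of assuming a compile-time cuBLAS build implies a working device.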
References
#3946 - supports running on CPU for GGML_USE_CUBLAS=ON build
Author: wsxiaoys
Parents: 381efbf4