llama.cpp
Support running on CPU for a GGML_USE_CUBLAS=ON build #3946
Merged
Commits (3)
prototyping the idea that supports running on CPU for a GGML_USE_CUBLAS=on build
wsxiaoys committed 2 years ago
doc: add comments to ggml_cublas_loaded()
wsxiaoys committed 2 years ago
fix defined(...)
wsxiaoys committed 2 years ago
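
Taken together, the commits suggest a simple pattern: with GGML_USE_CUBLAS=ON, CUDA code paths are no longer taken unconditionally just because the compile-time flag is set; they are additionally gated at runtime on whether cuBLAS actually initialized, which `ggml_cublas_loaded()` (the function documented in the second commit) reports. Below is a minimal sketch of that pattern, assuming `ggml_cublas_loaded()` returns a bool and is declared in `ggml-cuda.h`; the helper name `use_gpu_backend` is hypothetical and not from the PR's diff.

```cpp
#include "ggml.h"
#ifdef GGML_USE_CUBLAS
#include "ggml-cuda.h"   // declares ggml_cublas_loaded()
#endif

// Hypothetical helper (not from the PR): decide at runtime whether to
// take the CUDA backend path. In a build with GGML_USE_CUBLAS=ON that
// runs on a machine without a usable CUDA device, ggml_cublas_loaded()
// returns false, so the caller falls back to plain CPU execution
// instead of failing.
static bool use_gpu_backend() {
#ifdef GGML_USE_CUBLAS
    return ggml_cublas_loaded();
#else
    return false;
#endif
}
```

The third commit's `fix defined(...)` fits the same picture: the compile-time `#if defined(GGML_USE_CUBLAS)` checks stay, but they now wrap a runtime check rather than deciding the backend on their own.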