llama.cpp
Support running on CPU for a GGML_USE_CUBLAS=ON build #3946
Merged

Commits
  • prototyping the idea that supports running on CPU for a GGML_USE_CUBLAS=ON build
    wsxiaoys committed 2 years ago
  • doc: add comments to ggml_cublas_loaded()
    wsxiaoys committed 2 years ago
  • fix defined(...)
    wsxiaoys committed 2 years ago
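The second commit documents ggml_cublas_loaded(), the runtime check this PR is built around: a GGML_USE_CUBLAS=ON binary can ask at runtime whether cuBLAS actually initialized and, if not, stay on the CPU instead of aborting. Below is a minimal sketch of how a caller might use it, assuming the declaration GGML_API bool ggml_cublas_loaded(void); from ggml-cuda.h; pick_n_gpu_layers is a hypothetical helper for illustration, not part of the PR:

```c
#include <stdio.h>
#include <stdbool.h>

#ifdef GGML_USE_CUBLAS
#include "ggml-cuda.h" // declares: GGML_API bool ggml_cublas_loaded(void);
#endif

// Hypothetical helper: decide how many layers to offload to the GPU.
// In a CUBLAS build, cuBLAS initialization can still fail at runtime
// (e.g. no CUDA driver or device present), so check before offloading.
static int pick_n_gpu_layers(int requested) {
#ifdef GGML_USE_CUBLAS
    if (ggml_cublas_loaded()) {
        return requested;  // CUDA is usable: offload as requested
    }
    fprintf(stderr, "warning: cuBLAS not loaded, falling back to CPU\n");
#endif
    return 0;              // CPU-only fallback
}
```

The point of the change, as the PR title suggests, is that a single binary compiled with GGML_USE_CUBLAS=ON can still run on machines without a usable CUDA setup, rather than requiring a separate CPU-only build.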