llama.cpp
36feaeb4 - ci : enable LLAMA_CUBLAS=1 for CUDA nodes
Commit (1 year ago)
ci : enable LLAMA_CUBLAS=1 for CUDA nodes

ggml-ci
References
#4990 - ggml : add IQ2 to test-backend-ops + refactoring
Author
ggerganov
Parents
e9a5d54b
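
For context: at the time of this commit, LLAMA_CUBLAS was the build-time switch that compiled llama.cpp with cuBLAS GPU acceleration, and this change turns it on for the CUDA nodes in the ggml-ci fleet (the "ggml-ci" trailer in the message is what triggers those runs). A minimal sketch of what enabling the flag looks like in a local build; the exact ci/run.sh invocation may differ, and the flag was later superseded upstream (LLAMA_CUDA, then GGML_CUDA):

    # Make build with cuBLAS offloading enabled (requires the CUDA toolkit)
    make LLAMA_CUBLAS=1

    # Equivalent CMake configuration
    cmake -B build -DLLAMA_CUBLAS=ON
    cmake --build build --config Release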