llama.cpp
Commit a818f302
1 year ago
CUDA: use MMQ instead of cuBLAS by default (#8075)
References
#8075 - CUDA: use MMQ instead of cuBLAS by default
Author
JohannesGaessler
Parents
d62e4aaa
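
This commit flips the default CUDA GEMM path for quantized weights from cuBLAS to llama.cpp's own MMQ (quantized matrix multiplication) kernels. As a rough illustration of the kind of dispatch involved, here is a minimal, self-contained C++ sketch; every name in it (FORCE_CUBLAS, mul_mat, launch_mmq_kernel, launch_cublas_gemm) is a hypothetical stand-in, not llama.cpp's actual identifier.

    // A minimal sketch of the dispatch change described by this commit:
    // prefer the MMQ (quantized matrix multiplication) kernels over cuBLAS
    // unless cuBLAS is explicitly forced at build time. All names here are
    // hypothetical stand-ins, not llama.cpp's actual identifiers.
    #include <cstdio>

    // Stand-ins for the two GEMM back ends.
    static void launch_mmq_kernel()  { std::puts("MMQ kernels"); }
    static void launch_cublas_gemm() { std::puts("cuBLAS GEMM"); }

    // Build-time opt-out, analogous in spirit to forcing cuBLAS at compile time.
    #ifndef FORCE_CUBLAS
    #define FORCE_CUBLAS 0
    #endif

    // After this commit, MMQ is the default path for quantized weights;
    // cuBLAS remains available as the explicit opt-in.
    static void mul_mat(bool src_is_quantized) {
        if (!FORCE_CUBLAS && src_is_quantized) {
            launch_mmq_kernel();   // new default
        } else {
            launch_cublas_gemm();  // previous default, now opt-in
        }
    }

    int main() {
        mul_mat(true);   // quantized weights -> MMQ kernels
        mul_mat(false);  // unquantized      -> cuBLAS GEMM
        return 0;
    }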