llama.cpp
a818f302 - CUDA: use MMQ instead of cuBLAS by default (#8075)

Committed 1 year ago