llama.cpp
9c42b171
- CUDA: fix logic for V100 + GGML_CUDA_FORCE_MMQ (#12098)
CUDA: fix logic for V100 + GGML_CUDA_FORCE_MMQ (#12098)

Committed: 253 days ago
References: #12098 - CUDA: fix logic for V100 + GGML_CUDA_FORCE_MMQ
Author: JohannesGaessler
Parent: 05e6f5aa