llama.cpp
1f0dabda - CUDA: use tensor cores for MMQ (#7676)
Commit
1 year ago
CUDA: use tensor cores for MMQ (#7676)
* CUDA: int8 tensor cores for MMQ (legacy quants)
* fix out-of-bounds writes
* __builtin_assume -> GGML_CUDA_ASSUME
* fix writeback returning too early
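The commit message refers to int8 tensor cores for MMQ (llama.cpp's quantized matrix multiplication kernels). As a rough illustration of the general technique, the sketch below multiplies int8 tiles with int32 accumulation on tensor cores through CUDA's WMMA API. It is not the llama.cpp MMQ kernel (which uses its own MMA code paths); the kernel name, tile shape, and test values are illustrative assumptions. Requires compute capability 7.2 or newer, e.g. nvcc -arch=sm_75.

```cuda
// Minimal sketch, assuming a single 16x16 output tile: one warp multiplies an
// int8 A (16 x K, row-major) by an int8 B (K x 16, column-major) into an int32 C
// using tensor cores via the WMMA API. Not the actual llama.cpp MMQ kernel.
#include <cstdio>
#include <cuda_runtime.h>
#include <mma.h>

using namespace nvcuda;

__global__ void int8_mma_tile(const signed char * A, const signed char * B, int * C, int K) {
    // Fragments for an m16n16k16 int8 MMA with int32 accumulation.
    wmma::fragment<wmma::matrix_a, 16, 16, 16, signed char, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, signed char, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, int> c_frag;

    wmma::fill_fragment(c_frag, 0);

    // Walk the shared K dimension in 16-wide steps, accumulating into int32.
    for (int k = 0; k < K; k += 16) {
        wmma::load_matrix_sync(a_frag, A + k, K); // next 16 columns of row-major A
        wmma::load_matrix_sync(b_frag, B + k, K); // next 16 rows of column-major B
        wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);
    }

    wmma::store_matrix_sync(C, c_frag, 16, wmma::mem_row_major);
}

int main() {
    const int K = 64; // must be a multiple of 16
    signed char *A, *B;
    int *C;
    cudaMallocManaged(&A, 16 * K);
    cudaMallocManaged(&B, 16 * K);
    cudaMallocManaged(&C, 16 * 16 * sizeof(int));
    for (int i = 0; i < 16 * K; ++i) { A[i] = 1; B[i] = 2; }

    int8_mma_tile<<<1, 32>>>(A, B, C, K); // one warp computes the whole tile
    cudaDeviceSynchronize();

    printf("C[0][0] = %d (expected %d)\n", C[0], 2 * K);
    return 0;
}
```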
References
#7676 - CUDA: use tensor cores for MMQ
Author
JohannesGaessler
Parents
af4ae502