llama.cpp
a4e15a36
Commit
1 year ago
cuda : add CUDA_USE_TENSOR_CORES and GGML_CUDA_FORCE_MMQ macros
References
#3776 - cuda : improve text-generation and batched decoding performance
Author
ggerganov
Committer
ggerganov
Parents
4c6744b5
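The two macros named in the commit title gate which CUDA matrix-multiplication path gets compiled: GGML_CUDA_FORCE_MMQ forces the quantized mul-mat kernels, while CUDA_USE_TENSOR_CORES marks builds that may take the cuBLAS/tensor-core route. Below is a minimal sketch of how such a pair of preprocessor flags could interact; the guard names come from the commit title, but the helper function, the compute-capability constant, and the dispatch logic are illustrative assumptions, not the actual ggml-cuda.cu source.

// Sketch only, assuming GGML_CUDA_FORCE_MMQ is a user-set build define.
// When it is absent, CUDA_USE_TENSOR_CORES is defined so a tensor-core
// (cuBLAS FP16) path can be chosen on GPUs that support it.
#include <stdio.h>

#if !defined(GGML_CUDA_FORCE_MMQ)
#define CUDA_USE_TENSOR_CORES
#endif

// Illustrative threshold: compute capability 7.0 (Volta) introduced
// tensor cores. The real code uses its own constants and device query.
#define MIN_CC_TENSOR_CORES 700

// Hypothetical dispatcher showing how a caller might pick a kernel path.
static const char * pick_mul_mat_path(int compute_capability) {
#if defined(CUDA_USE_TENSOR_CORES)
    if (compute_capability >= MIN_CC_TENSOR_CORES) {
        return "cublas-fp16-tensor-cores";
    }
#endif
    return "mmq-quantized-kernels";
}

int main(void) {
    // e.g. 610 = Pascal (no tensor cores), 800 = Ampere (tensor cores)
    printf("cc 610 -> %s\n", pick_mul_mat_path(610));
    printf("cc 800 -> %s\n", pick_mul_mat_path(800));
    return 0;
}

In a setup like this, forcing the quantized kernels is just a matter of passing the define at compile time (for example `-DGGML_CUDA_FORCE_MMQ`); exactly how the llama.cpp build system surfaces that option is not shown here.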