llama.cpp
52af7826 - cuda : new cublas gemm branch for multi-batch quantized src0
Commit
1 year ago
cuda : new cublas gemm branch for multi-batch quantized src0
References
#3776 - cuda : improve text-generation and batched decoding performance
Author
ggerganov
Parents
59d1232e
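
The commit title only hints at the technique: when src0 is a quantized weight tensor and the matrix multiplication is batched, dequantize src0 to FP16 once and dispatch a single cuBLAS batched GEMM instead of running the custom quantized matmul kernels per row. The sketch below is a minimal illustration of that idea, not the actual llama.cpp code from this commit; the dequantize_to_fp16 helper, the parameter names, and the stride-0 broadcast of the weights are assumptions made for the example.

```cpp
#include <cublas_v2.h>
#include <cuda_fp16.h>
#include <cstdint>

// Assumed placeholder for llama.cpp's dequantization kernels: expands a
// quantized weight matrix into a temporary FP16 buffer on the GPU.
void dequantize_to_fp16(const void * src0_q, half * src0_f16,
                        int64_t nrows, int64_t ncols, cudaStream_t stream);

// src0_q  : quantized weights, logically ne01 x ne00, shared by all batch entries
// src1    : FP16 activations, batch entries of ne00 x ne11 (column-major)
// dst     : FP16 outputs,     batch entries of ne01 x ne11 (column-major)
// src0_f16: preallocated scratch buffer of ne00*ne01 half elements
void mul_mat_batched_cublas(cublasHandle_t handle, cudaStream_t stream,
                            const void * src0_q, const half * src1, half * dst,
                            half * src0_f16,
                            int64_t ne00, int64_t ne01, int64_t ne11, int64_t batch) {
    // 1. Dequantize src0 once; the cost is amortized over all batch entries.
    dequantize_to_fp16(src0_q, src0_f16, ne01, ne00, stream);

    const half alpha = __float2half(1.0f);
    const half beta  = __float2half(0.0f);

    cublasSetStream(handle, stream);

    // 2. A single strided-batched GEMM: dst[b] = src0^T * src1[b].
    //    Batch stride 0 for A reuses the same dequantized weights for every entry.
    cublasGemmStridedBatchedEx(handle,
        CUBLAS_OP_T, CUBLAS_OP_N,
        (int) ne01, (int) ne11, (int) ne00,
        &alpha,
        src0_f16, CUDA_R_16F, (int) ne00, 0,           // A: weights (broadcast)
        src1,     CUDA_R_16F, (int) ne00, ne00*ne11,   // B: activations
        &beta,
        dst,      CUDA_R_16F, (int) ne01, ne01*ne11,   // C: outputs
        (int) batch,
        CUBLAS_COMPUTE_16F, CUBLAS_GEMM_DEFAULT_TENSOR_OP);
}
```

For single-token (batch size 1) generation the custom quantized kernels remain the faster path, since they avoid the dequantization pass; the point of a separate cuBLAS branch is that for multi-batch decoding the one-time dequantization cost is outweighed by the throughput of a tensor-core GEMM, which is what PR #3776 targets.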