llama.cpp
defe2158
- CUDA: mul_mat_v support for batch sizes > 1 (#14262)
Commit
92 days ago
CUDA: mul_mat_v support for batch sizes > 1 (#14262)
* CUDA: mul_mat_v support for batch sizes > 1
* use 64 bit math for initial offset calculation
References
#14262 - CUDA: mul_mat_v support for batch sizes > 1
Author
JohannesGaessler
Parents
7b50d589