llama.cpp PR #14262 (Merged): CUDA: mul_mat_v support for batch sizes > 1