llama.cpp
c262bedd - CUDA: Prefer vector flash decoding kernel for Gemma models (#12738)

CUDA: Prefer vector flash decoding kernel for Gemma models (#12738)

* Prefer vector flash decoding kernel for Gemma models

The vector flash decoding kernel was not being picked for models with head dimension 256. Gemma models are in this category. Removing this limit improves e2e performance by up to 12% in gen phase throughput for Gemma models.

* Update ggml/src/ggml-cuda/fattn.cu

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
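In effect, the change relaxes a head-size guard in the kernel-selection logic of `fattn.cu` so that head dimension 256 (Gemma's case) can take the vector decode path during generation. The sketch below illustrates that kind of dispatch decision; the names (`FattnKernel`, `select_fattn_kernel`) and the exact conditions are hypothetical assumptions for illustration, not the actual ggml-cuda API.

```cpp
// Hypothetical sketch of the dispatch heuristic the commit describes.
// Previously, a head-size cap kept head_dim == 256 (Gemma) off the
// vector flash-decoding path, forcing a slower kernel in the gen phase.
#include <cstdint>

enum class FattnKernel { Vector, Tile };

// Pick a flash-attention kernel from head size and batch size
// (illustrative only; the real selection in fattn.cu is more involved).
static FattnKernel select_fattn_kernel(int64_t head_dim, int64_t n_batch) {
    // The vector kernel targets the generation phase (small batches).
    if (n_batch > 1) {
        return FattnKernel::Tile;
    }
    // Old (hypothetical) guard that excluded Gemma's head size:
    //   if (head_dim > 128) return FattnKernel::Tile;
    // After the change, head_dim == 256 also takes the vector path.
    return FattnKernel::Vector;
}
```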
Files changed:
  • ggml/src/ggml-cuda/fattn.cu