llama.cpp
8af1f5f4 - ggml-hexagon: flash-attn opt (#19025)

Committed 4 days ago
ggml-hexagon: flash-attn opt (#19025)

* optimize flash attention kernel by improving score computation and online softmax update
* wip
* Refactor online softmax update in flash attention kernel for improved performance
* Optimize flash attention kernel by replacing float array with HVX_Vector for score computation
* wip
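The "online softmax update" named in the commit is the streaming recurrence flash attention uses to accumulate softmax(q·K)·V in a single pass over the KV cache, rescaling the partial sums whenever a new running maximum appears. Below is a minimal scalar sketch of that recurrence in plain C; it is not the ggml-hexagon kernel (which vectorizes the same loop with HVX_Vector registers), and all names such as `flash_attn_row` are illustrative assumptions rather than identifiers from the source.

```c
/*
 * Illustrative scalar sketch of the online softmax update used by
 * flash-attention kernels. NOT the actual ggml-hexagon code, which
 * implements this recurrence with HVX_Vector intrinsics; the function
 * name and signature here are hypothetical.
 */
#include <math.h>
#include <stddef.h>

/* Process one query row against n_kv key/value entries with head size d.
 * scores[i] = q . k_i (already scaled); v is n_kv x d, row-major.
 * out receives the softmax-weighted sum of the value rows.            */
static void flash_attn_row(const float *scores, const float *v,
                           size_t n_kv, size_t d, float *out) {
    float m = -INFINITY;  /* running maximum of the scores seen so far */
    float l = 0.0f;       /* running sum of exp(score - m)             */
    for (size_t j = 0; j < d; ++j) out[j] = 0.0f;

    for (size_t i = 0; i < n_kv; ++i) {
        float s     = scores[i];
        float m_new = s > m ? s : m;
        float scale = expf(m - m_new);   /* rescale old accumulators   */
        float p     = expf(s - m_new);   /* weight of the new score    */

        l = l * scale + p;
        for (size_t j = 0; j < d; ++j)
            out[j] = out[j] * scale + p * v[i * d + j];
        m = m_new;
    }

    /* Final normalization: divide by the softmax denominator. */
    for (size_t j = 0; j < d; ++j) out[j] /= l;
}
```

The rescaling by `expf(m - m_new)` is what lets the kernel keep only running statistics (max, denominator, weighted value sum) instead of materializing the full score array, which is the part the commit moves from a float array into HVX vector registers.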