llama.cpp
e11bd856 - CPU/CUDA: Gemma 2 FlashAttention support (#8542)

Commit
1 year ago
CPU/CUDA: Gemma 2 FlashAttention support (#8542)

* CPU/CUDA: Gemma 2 FlashAttention support
* apply logit_softcap to scale in kernel
* disable logit softcapping tests on Metal
* remove metal check