llama.cpp
e11bd856
- CPU/CUDA: Gemma 2 FlashAttention support (#8542)
Commit
1 year ago
CPU/CUDA: Gemma 2 FlashAttention support (#8542)
* CPU/CUDA: Gemma 2 FlashAttention support
* apply logit_softcap to scale in kernel
* disable logit softcapping tests on Metal
* remove metal check
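The "apply logit_softcap to scale in kernel" item refers to Gemma 2's attention logit soft-capping, where each scaled QK score is squashed into (-softcap, +softcap) via softcap * tanh(x / softcap) before the softmax. The sketch below only illustrates that formula; the helper name softcap_logit and the constants are illustrative assumptions, not the commit's actual FlashAttention kernel code.

```cpp
// Minimal sketch (not llama.cpp's kernel code) of Gemma 2 style attention
// logit soft-capping: the scaled QK^T score is passed through tanh so the
// result stays within (-softcap, +softcap) before the softmax.
#include <math.h>
#include <stdio.h>

// Hypothetical helper for illustration. Conceptually, folding the softcap
// into the existing KQ scale (as the commit message describes) means the
// kernel can compute tanh(kq * kq_scale / softcap) with a single
// premultiplied factor instead of an extra division per score.
static float softcap_logit(float kq, float kq_scale, float softcap) {
    return softcap * tanhf(kq * kq_scale / softcap);
}

int main(void) {
    const float softcap  = 50.0f;   // Gemma 2's attn_logit_softcapping value
    const float kq_scale = 0.0625f; // illustrative 1/sqrt(head_dim) for head_dim = 256
    const float samples[] = {-4096.0f, -64.0f, 0.0f, 64.0f, 4096.0f};
    for (int i = 0; i < 5; ++i) {
        printf("raw = %8.1f  capped = %8.3f\n",
               samples[i], softcap_logit(samples[i], kq_scale, softcap));
    }
    return 0;
}
```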
References
#8542 - CPU/CUDA: Gemma 2 FlashAttention support
Author
JohannesGaessler
Parents
8f824ffe