llama.cpp
f4424c15 - Disable flash attention for Gemma2
Commit
1 year ago
Disable flash attention for Gemma2
References
#8197 - Add attention and final logit soft-capping, update scaling factor to Gemma2
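Note: the referenced change (#8197) introduces the tanh-based soft-capping used by Gemma 2, softcap(x) = cap * tanh(x / cap), applied to attention logits and final logits; the flash-attention path presumably did not support this transform at the time, which is why this commit disables flash attention for Gemma2. Below is a minimal, self-contained sketch of the soft-capping formula only; the cap value of 50.0 is an illustrative assumption and is not taken from this commit.

// Minimal sketch of Gemma 2 style logit soft-capping: softcap(x) = cap * tanh(x / cap).
// The cap value below is an assumed, illustrative constant.
#include <cmath>
#include <cstdio>
#include <vector>

// Apply soft-capping in place: bounds every logit to (-cap, +cap)
// while staying roughly linear for |x| much smaller than cap.
static void soft_cap(std::vector<float> & logits, float cap) {
    for (float & x : logits) {
        x = cap * std::tanh(x / cap);
    }
}

int main() {
    std::vector<float> attn_logits = {-120.0f, -3.0f, 0.5f, 8.0f, 240.0f};

    soft_cap(attn_logits, 50.0f); // assumed attention-logit cap

    for (float x : attn_logits) {
        printf("%.3f\n", x);
    }
    return 0;
}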
Author
abetlen
Parents
d1137c20