llama.cpp
Commit a8942790: Add custom kq scaling from Gemma2Attention
Committed 1 year ago
References
#8197 - Add attention and final logit soft-capping, update scaling factor to Gemma2
Author
abetlen
Parents
6f2464e3
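For context, the referenced change (#8197) brings llama.cpp's Gemma 2 support in line with Gemma2Attention: queries are scaled by a model-configured factor rather than the usual head_dim**-0.5, and both attention logits and final output logits are soft-capped. A minimal sketch of the two ideas, assuming the standard cap * tanh(x / cap) soft-capping formula; the constants below (query_pre_attn_scalar = 256, caps of 50.0 and 30.0) come from the published Gemma 2 configuration, not from this commit:

```python
import math

# Gemma 2's custom kq scale: queries are multiplied by
# query_pre_attn_scalar**-0.5 (256 in the published config)
# instead of the conventional head_dim**-0.5.
QUERY_PRE_ATTN_SCALAR = 256
KQ_SCALE = QUERY_PRE_ATTN_SCALAR ** -0.5  # 0.0625

# Soft-cap values from the published Gemma 2 config (assumption:
# this commit wires up the same mechanism).
ATTN_LOGIT_SOFTCAP = 50.0
FINAL_LOGIT_SOFTCAP = 30.0

def soft_cap(logit: float, cap: float) -> float:
    # Squash a logit smoothly into (-cap, cap): small values pass
    # through nearly unchanged, large ones saturate at +/- cap.
    return cap * math.tanh(logit / cap)

print(soft_cap(10.0, ATTN_LOGIT_SOFTCAP))   # ~9.87, near-linear region
print(soft_cap(500.0, ATTN_LOGIT_SOFTCAP))  # ~50.0, saturated
```

Unlike hard clipping, soft-capping is differentiable everywhere, which keeps gradients well-behaved during training while still bounding the logit range at inference.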