llama.cpp f249c997 - llama : adapt to F16 KQ_pos
Commit
1 year ago
llama : adapt to F16 KQ_pos
References
gg/flash-attn-sync
#5021 - ggml : add Flash Attention
Author
ggerganov
Committer
ggerganov
Parents
31109ca0
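
The commit title indicates that the KQ_pos tensor (the per-token position input used for the positional/ALiBi bias in the attention graph) is now built in F16 rather than F32, so it lines up with the F16 inputs used by the Flash Attention work in #5021. Below is a minimal sketch of what such an adaptation can look like on the llama.cpp side; the function names (build_inp_KQ_pos, set_KQ_pos_data), the use_f16 flag, and the n_kv dimension are illustrative assumptions, not the actual diff.

```c
// Sketch only: build KQ_pos as GGML_TYPE_F16 when the F16 attention path is used.
// Function names and the use_f16 flag are hypothetical, not the real llama.cpp code.
#include <stdbool.h>
#include <stdint.h>
#include "ggml.h"

static struct ggml_tensor * build_inp_KQ_pos(struct ggml_context * ctx, int64_t n_kv, bool use_f16) {
    const enum ggml_type type = use_f16 ? GGML_TYPE_F16 : GGML_TYPE_F32;
    return ggml_new_tensor_1d(ctx, type, n_kv);
}

// Host-side fill: positions arrive as int32 and are converted to the tensor's type,
// using half-precision conversion when the tensor was created as F16.
static void set_KQ_pos_data(struct ggml_tensor * KQ_pos, const int32_t * pos, int64_t n) {
    if (KQ_pos->type == GGML_TYPE_F16) {
        ggml_fp16_t * dst = (ggml_fp16_t *) KQ_pos->data;
        for (int64_t i = 0; i < n; ++i) {
            dst[i] = ggml_fp32_to_fp16((float) pos[i]);
        }
    } else {
        float * dst = (float *) KQ_pos->data;
        for (int64_t i = 0; i < n; ++i) {
            dst[i] = (float) pos[i];
        }
    }
}
```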