llama.cpp
5a19a9f6
Commit
1 year ago
cuda : add flash_attn kernel (wip)
References
#5021 - ggml : add Flash Attention
Author: ggerganov
Committer: ggerganov
Parents: 2e460137
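The commit adds a work-in-progress Flash Attention kernel to the CUDA backend, as part of #5021. As a rough illustration of the underlying idea only, the sketch below shows the online-softmax formulation Flash Attention is built on: the attention matrix is never materialized, and a running max, running denominator, and rescaled output accumulator are updated per key/value. Tensor layout (row-major float), dimension names, and the one-thread-per-query-row mapping are assumptions for readability; the actual ggml kernel in this commit is tiled and operates on ggml tensors.

```cuda
// Illustrative sketch of the online-softmax core of Flash Attention.
// NOT the kernel from this commit: untiled, one thread per query row,
// plain float buffers, head dimension d assumed <= 128.
#include <cuda_runtime.h>
#include <math.h>

// O = softmax(scale * Q K^T) V, computed without storing the score matrix.
__global__ void flash_attn_sketch(const float * Q, const float * K, const float * V,
                                  float * O, int n_q, int n_kv, int d, float scale) {
    const int iq = blockIdx.x * blockDim.x + threadIdx.x;
    if (iq >= n_q || d > 128) return;

    float m = -INFINITY;        // running max of the scores for this row
    float l = 0.0f;             // running softmax denominator
    float acc[128] = {0.0f};    // unnormalized output accumulator

    for (int ik = 0; ik < n_kv; ++ik) {
        // score s = scale * dot(Q[iq], K[ik])
        float s = 0.0f;
        for (int c = 0; c < d; ++c) {
            s += Q[iq*d + c] * K[ik*d + c];
        }
        s *= scale;

        // online softmax update: rescale previous state by exp(m_old - m_new)
        const float m_new = fmaxf(m, s);
        const float corr  = expf(m - m_new);
        const float p     = expf(s - m_new);

        l = l*corr + p;
        for (int c = 0; c < d; ++c) {
            acc[c] = acc[c]*corr + p * V[ik*d + c];
        }
        m = m_new;
    }

    // final normalization of the output row
    for (int c = 0; c < d; ++c) {
        O[iq*d + c] = acc[c] / l;
    }
}
```

The running rescaling is what lets the real kernel process K/V in tiles kept in shared memory while keeping O(1) extra state per query row, which is the memory saving Flash Attention is after.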