llama.cpp
b957b8f5 - cuda : add flash_attn kernel (wip)
Commit
1 year ago
cuda : add flash_attn kernel (wip)
References
gg/flash-attn-cuda
Author
ggerganov
Committer
ggerganov
Parents
2e460137
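
The commit message describes a work-in-progress flash attention kernel for the CUDA backend. For context, below is a minimal, naive sketch of the online-softmax recurrence that flash attention is built on: each query row keeps a running score maximum, a running normalizer, and a running weighted sum of value rows, so the full attention matrix is never materialized. This is an illustration only, not the kernel from this commit; the head dimension D, the one-thread-per-query mapping, and the row-major Q/K/V layout are all assumptions made for the sketch.

```cuda
// Naive one-thread-per-query illustration of the online-softmax
// recurrence behind flash attention. Not the kernel from this commit;
// D and the row-major layout are assumptions for this sketch.
#include <cuda_runtime.h>
#include <math.h>

#define D 64  // head dimension (assumed)

__global__ void flash_attn_naive(const float *Q,  // [n_q,  D]
                                 const float *K,  // [n_kv, D]
                                 const float *V,  // [n_kv, D]
                                 float       *O,  // [n_q,  D]
                                 int n_q, int n_kv, float scale) {
    const int q = blockIdx.x * blockDim.x + threadIdx.x;
    if (q >= n_q) return;

    float m = -INFINITY;    // running max of the attention scores
    float l = 0.0f;         // running sum of exp(score - m)
    float acc[D] = {0.0f};  // running weighted sum of V rows

    for (int k = 0; k < n_kv; ++k) {
        // score = scale * dot(Q[q], K[k])
        float s = 0.0f;
        for (int d = 0; d < D; ++d) {
            s += Q[q*D + d] * K[k*D + d];
        }
        s *= scale;

        // online softmax: rescale the previous state when the max grows,
        // so expf() is always evaluated on non-positive arguments
        const float m_new = fmaxf(m, s);
        const float corr  = expf(m - m_new);  // 0 on the first iteration
        const float p     = expf(s - m_new);

        l = l*corr + p;
        for (int d = 0; d < D; ++d) {
            acc[d] = acc[d]*corr + p*V[k*D + d];
        }
        m = m_new;
    }

    // final normalization: O[q] = softmax(scale * Q[q] K^T) V
    for (int d = 0; d < D; ++d) {
        O[q*D + d] = acc[d]/l;
    }
}
```

A launch such as `flash_attn_naive<<<(n_q + 255)/256, 256>>>(Q, K, V, O, n_q, n_kv, 1.0f/sqrtf(D))` would compute one attention head. A production kernel like the WIP one on gg/flash-attn-cuda would additionally tile K/V through shared memory and parallelize the dot products across a warp or block; the sketch keeps only the numerically stable streaming recurrence.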