llama.cpp
6be02b59
- cuda : fix build
Committed: 1 year ago

References:
- gg/flash-attn-wip
- #5021 - ggml : add Flash Attention

Author: ggerganov
Parents: 013721df