llama.cpp
77d08f32 - metal : parallelize across KV size
Commit
1 year ago
metal : parallelize across KV size
References
#5021 - ggml : add Flash Attention
Author
ggerganov
Committer
ggerganov
Parents
a4b6341c
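
The commit title, together with the referenced Flash Attention work in #5021, indicates that the Metal attention kernel spreads its work across the KV-cache length instead of only across queries or heads: each group of threads handles a slice of the KV entries and the partial results are then merged. What follows is a minimal, hedged CPU-side sketch of that general idea for a single query and head, written in C++ for illustration only. All names (Partial, kv_chunk, merge_partials) and the exact arithmetic are assumptions of this sketch, not code from the commit or from the actual Metal kernel.

// Hedged illustration: split the KV range into chunks, accumulate a partial
// softmax-weighted sum per chunk, then merge the partials with a numerically
// stable, log-sum-exp style reduction. Hypothetical names; not the Metal
// kernel from this commit.
#include <cmath>
#include <cstddef>
#include <vector>

struct Partial {
    float              m;  // running max of the attention scores in this chunk
    float              s;  // sum of exp(score - m) over the chunk
    std::vector<float> o;  // exp-weighted sum of V rows (length d_v)
};

// Process one KV chunk [k0, k1) for a single query q (length d_k).
static Partial kv_chunk(const std::vector<float> & q,
                        const std::vector<std::vector<float>> & K,
                        const std::vector<std::vector<float>> & V,
                        size_t k0, size_t k1, float scale) {
    const size_t d_v = V.empty() ? 0 : V[0].size();
    Partial p{-INFINITY, 0.0f, std::vector<float>(d_v, 0.0f)};
    for (size_t i = k0; i < k1; ++i) {
        float score = 0.0f;
        for (size_t j = 0; j < q.size(); ++j) score += q[j]*K[i][j];
        score *= scale;
        const float m_new = std::max(p.m, score);
        const float c     = std::exp(p.m - m_new);   // rescale previous accumulators
        const float w     = std::exp(score - m_new); // weight of the current KV entry
        for (size_t j = 0; j < d_v; ++j) p.o[j] = p.o[j]*c + w*V[i][j];
        p.s = p.s*c + w;
        p.m = m_new;
    }
    return p;
}

// Merge two partials as if their chunks had been processed as one range.
static Partial merge_partials(const Partial & a, const Partial & b) {
    const float m  = std::max(a.m, b.m);
    const float ca = std::exp(a.m - m);
    const float cb = std::exp(b.m - m);
    Partial out{m, a.s*ca + b.s*cb, std::vector<float>(a.o.size())};
    for (size_t j = 0; j < a.o.size(); ++j) out.o[j] = a.o[j]*ca + b.o[j]*cb;
    return out;
}

In a GPU version of this scheme the chunks would be assigned to different simdgroups or threadgroups, the merge would happen in threadgroup memory, and the final attention output for the query is o[j] / s of the fully merged partial.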