llama.cpp
Commit 18e43766
metal : fix flash attention kernel requirements (#7169)
metal : fix flash attention kernel requirements (#7169)

* metal : fix flash attention kernel requirements

ggml-ci

* metal : fix ggml_metal_supports_op

ggml-ci
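The second change touches ggml_metal_supports_op, the Metal backend's check for whether a given op can run on the current device. Below is a minimal, self-contained C sketch of that gating pattern; the type, enum, and field names (example_metal_ctx, support_simdgroup_mm, and so on) are illustrative assumptions, not the actual ggml-metal definitions. It shows how a flash-attention op can be reported as unsupported on GPUs that lack the simdgroup features its kernels require.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical stand-ins for the real ggml types; names are assumptions
     * made for illustration, not the actual ggml-metal definitions. */
    enum example_op {
        EXAMPLE_OP_ADD,
        EXAMPLE_OP_FLASH_ATTN_EXT, /* flash attention (extended) */
    };

    struct example_metal_ctx {
        bool support_simdgroup_reduction; /* device has simdgroup reductions */
        bool support_simdgroup_mm;        /* device has simdgroup matrix multiply */
    };

    /* Gating pattern: an op is only reported as supported when the device
     * provides the features its kernel requires. */
    static bool example_metal_supports_op(const struct example_metal_ctx *ctx,
                                          enum example_op op) {
        switch (op) {
            case EXAMPLE_OP_FLASH_ATTN_EXT:
                /* flash attention kernels rely on simdgroup matrix operations,
                 * so devices without them must take a different path */
                return ctx->support_simdgroup_mm;
            default:
                return true;
        }
    }

    int main(void) {
        struct example_metal_ctx old_gpu = { .support_simdgroup_reduction = false,
                                             .support_simdgroup_mm        = false };
        struct example_metal_ctx new_gpu = { .support_simdgroup_reduction = true,
                                             .support_simdgroup_mm        = true };

        /* prints 0 for the old GPU, 1 for the new one */
        printf("old GPU flash attn: %d\n",
               (int) example_metal_supports_op(&old_gpu, EXAMPLE_OP_FLASH_ATTN_EXT));
        printf("new GPU flash attn: %d\n",
               (int) example_metal_supports_op(&new_gpu, EXAMPLE_OP_FLASH_ATTN_EXT));
        return 0;
    }

Reporting the op as unsupported here lets the caller fall back to another backend or kernel path instead of launching a Metal kernel the hardware cannot run.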
References
#7169 - metal : fix flash attention kernel requirements
Author
ggerganov
Parents
8c660242