llama.cpp
1ad42b1f
- ggml : ggml_soft_max uses F16 mask
Commit 1ad42b1f (1 year ago)
ggml : ggml_soft_max uses F16 mask

References: gg/flash-attn-mask-f16
Author:     ggerganov
Committer:  ggerganov
Parents:    2ddc9bbe