llama.cpp, commit f9ca5dcb
llama : avoid ggml_cast, use F32 query

Author: ggerganov
Date: 1 year ago (relative timestamp as captured)
Parent: 40ea8cd1
References: #5021 (ggml : add Flash Attention)
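The commit title indicates the nature of the change: rather than casting the query tensor with `ggml_cast` before the Flash Attention operator, the query is kept in F32 and passed through directly. Since the diff itself is not shown on this page, the following is only a comment-level sketch of that before/after call pattern, assuming the ggml API as it stood around PR #5021; the exact signatures and variable names are assumptions, not taken from the commit.

```c
/* Sketch only; assumes ggml's API around PR #5021. Not taken from the diff. */

/* Before: the F32 query was cast down (e.g. to F16) to match the
 * type of the K/V tensors before calling the attention operator:
 *
 *   q_cast = ggml_cast(ctx, q, GGML_TYPE_F16);
 *   kqv    = ggml_flash_attn_ext(ctx, q_cast, k, v, mask, scale);
 *
 * After: the ggml_cast is dropped and the F32 query is passed
 * directly; the operator accepts the mixed-precision inputs:
 *
 *   kqv    = ggml_flash_attn_ext(ctx, q, k, v, mask, scale);
 */
```

Avoiding the explicit cast removes an extra tensor copy/conversion node from the graph, which is presumably the motivation behind the commit.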