llama.cpp
Commit 7c34655b
cuda : use int instead of int64_t

Noticeably improves performance (thanks to Johannes)
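The changed kernels are not shown on this page, but the motivation is a general CUDA property: NVIDIA GPUs have native 32-bit integer units, and int64_t index arithmetic is emulated with extra instructions and registers, which adds up in hot loops. A minimal illustrative sketch (a hypothetical element-wise kernel, not the actual ggml/llama.cpp code) of switching index variables from int64_t to int:

#include <cstdint>
#include <cuda_runtime.h>

// Hypothetical element-wise scale kernel, illustrating the kind of change
// described in the commit: index arithmetic in 32-bit int instead of int64_t.
// (Not the actual ggml kernel.)

// Before: 64-bit index math; the index computation and bounds check may
// compile to multiple 32-bit instructions and use extra registers.
__global__ void scale_i64(float * x, float v, const int64_t n) {
    const int64_t i = (int64_t) blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        x[i] *= v;
    }
}

// After: plain int, which maps directly onto the GPU's native 32-bit
// integer units. Valid as long as n fits in 31 bits.
__global__ void scale_i32(float * x, float v, const int n) {
    const int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        x[i] *= v;
    }
}

int main() {
    const int n = 1 << 20;
    float * x;
    cudaMalloc(&x, n * sizeof(float));
    cudaMemset(x, 0, n * sizeof(float));

    const int block = 256;
    const int grid  = (n + block - 1) / block;
    scale_i32<<<grid, block>>>(x, 2.0f, n);
    cudaDeviceSynchronize();

    cudaFree(x);
    return 0;
}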
References: #5021 - ggml : add Flash Attention
Author: ggerganov
Parents: b150abe8