llama.cpp
a885dcff
- batched-bench : fix llama_synchronize usage during prompt processing (#15835)
Commit · 58 days ago
batched-bench : fix llama_synchronize usage during prompt processing (#15835) ggml-ci
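The commit concerns where llama_synchronize() is called while batched-bench processes the prompt: llama_decode() can return before the backend has finished the submitted work, so the context must be synchronized before the clock is read or the measured prompt-processing time is wrong. The snippet below is a minimal sketch of that pattern, not the actual batched-bench code; the function name time_prompt_processing is hypothetical, and a valid llama_context and tokenized prompt are assumed.

// Minimal sketch (not the actual batched-bench code): time prompt processing
// for a single sequence. llama_decode() may return before the backend has
// finished the work, so llama_synchronize() must run before the timer is read.
#include "llama.h"
#include "ggml.h"

#include <cstdio>
#include <vector>

static bool time_prompt_processing(llama_context * ctx, const std::vector<llama_token> & prompt) {
    const int n_tokens = (int) prompt.size();

    llama_batch batch = llama_batch_init(n_tokens, /*embd =*/ 0, /*n_seq_max =*/ 1);

    for (int i = 0; i < n_tokens; ++i) {
        batch.token   [i]    = prompt[i];
        batch.pos     [i]    = i;
        batch.n_seq_id[i]    = 1;
        batch.seq_id  [i][0] = 0;
        batch.logits  [i]    = false;
    }
    batch.logits[n_tokens - 1] = true; // only the last token needs logits
    batch.n_tokens = n_tokens;

    const int64_t t_start = ggml_time_us();

    if (llama_decode(ctx, batch) != 0) {
        fprintf(stderr, "llama_decode() failed\n");
        llama_batch_free(batch);
        return false;
    }

    // wait for all pending backend work to finish before reading the clock
    llama_synchronize(ctx);

    const int64_t t_end = ggml_time_us();

    printf("prompt processing: %d tokens in %.3f ms\n", n_tokens, (t_end - t_start) / 1000.0);

    llama_batch_free(batch);
    return true;
}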
References
#15835 - batched-bench : fix llama_synchronize usage during prompt processing
Author
ggerganov
Parents
663027fd