llama.cpp
Commit 87f4102a - llama : revert n_threads_batch logic
Commit (1 year ago):
llama : revert n_threads_batch logic
ggml-ci
References:
- branch gg/fix-cpu-blas
- #4240 - llama : improve batched CPU perf with BLAS
Author: ggerganov
Committer: ggerganov
Parents: e9b7a5cb
Files changed (1):
- llama.cpp