llama.cpp
41318d70
- llama : use the same threshold for OpenBLAS and ggml thread limiting (#577)
Commit · 2 years ago
References: #577 - Use the same batch size threshold for enabling OpenBLAS and disabling ggml threading
Author: Piezoid
Parents: a6956b25