llama.cpp
e81b8e4b
- llama: use FA + max. GPU layers by default (#15434)
Commit
41 days ago
llama: use FA + max. GPU layers by default (#15434)
* llama: use max. GPU layers by default, auto -fa
* ggml-backend: abort instead of segfault
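The commit message describes two default changes: the flash-attention flag (-fa) gains an "auto" state that is now the default, and the number of GPU layers defaults to offloading as many layers as possible. Below is a minimal sketch of how such defaults could be wired up; it is not llama.cpp's actual code, and all names in it are hypothetical.

```cpp
// Hypothetical sketch of the two default changes described in the commit:
// a three-state flash-attention setting defaulting to AUTO, and an
// n_gpu_layers sentinel that means "offload all layers".
#include <cstdio>

enum class flash_attn_mode { AUTO, ON, OFF }; // hypothetical; mirrors -fa auto/on/off

struct cli_params {
    flash_attn_mode fa = flash_attn_mode::AUTO; // new default: let the backend decide
    int n_gpu_layers   = -1;                    // hypothetical sentinel: -1 = max. GPU layers
};

// Resolve the sentinel against the actual model depth.
static int resolve_gpu_layers(const cli_params & p, int n_layers_in_model) {
    return p.n_gpu_layers < 0 ? n_layers_in_model : p.n_gpu_layers;
}

int main() {
    cli_params p; // user passed neither -fa nor -ngl
    std::printf("fa=auto? %d, offloaded layers: %d\n",
                p.fa == flash_attn_mode::AUTO ? 1 : 0,
                resolve_gpu_layers(p, 32));
    return 0;
}
```

A sentinel default keeps "offload everything" independent of any particular model's layer count, which is one plausible way to implement the behavior the commit describes.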
References
#15434 - llama: use FA + max. GPU layers by default
Author
JohannesGaessler
Parents
38ad381f