llama.cpp
e81b8e4b - llama: use FA + max. GPU layers by default (#15434)

Commit · 41 days ago
llama: use FA + max. GPU layers by default (#15434)

* llama: use max. GPU layers by default, auto -fa
* ggml-backend: abort instead of segfault
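With this change, full GPU offload and flash attention become the defaults, so a plain invocation no longer needs the corresponding flags. A minimal sketch of how the defaults can be overridden, assuming the long-standing `-ngl` (`--n-gpu-layers`) and `-fa` (`--flash-attn`) options; the model path is a placeholder and the exact accepted values for `-fa` may vary by version:

```shell
# Default after this commit: all layers offloaded to the GPU,
# flash attention chosen automatically — no extra flags needed.
./llama-cli -m model.gguf -p "Hello"

# Explicitly override the new defaults (model.gguf is a placeholder):
./llama-cli -m model.gguf -ngl 0 -p "Hello"    # keep all layers on the CPU
./llama-cli -m model.gguf -fa off -p "Hello"   # disable flash attention
```

Previously, users had to opt in with `-ngl <N>` and `-fa`; the commit inverts that so the common GPU path works out of the box.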