llama.cpp
llama: use FA + max. GPU layers by default
#15434
Merged