llama.cpp
llama: use FA + max. GPU layers by default
#15434
Merged
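For context, this PR changes llama.cpp's runtime defaults so that model layers are offloaded to the GPU and Flash Attention is enabled automatically where the backend supports it, instead of requiring explicit flags. A minimal sketch of the difference in invocation, assuming a local GGUF model and the `llama-cli` binary; the exact value syntax for `-fa` (on/off/auto) follows the PR title and may differ in your build:

```sh
# Before this PR: GPU offload and Flash Attention had to be requested explicitly.
./llama-cli -m model.gguf -ngl 99 -fa

# After this PR: all layers are offloaded and Flash Attention is picked
# automatically when supported; flags are only needed to override the defaults.
./llama-cli -m model.gguf

# Opting out remains possible (assumed post-PR syntax: -fa takes on/off/auto).
./llama-cli -m model.gguf -ngl 0 -fa off
```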
Commits
- llama: use max. GPU layers by default, auto -fa (JohannesGaessler committed 225 days ago)
- disable -fa for server test (JohannesGaessler committed 225 days ago)
- remove redundant defaults (JohannesGaessler committed 223 days ago)
- ggml-backend: abort instead of segfault (JohannesGaessler committed 223 days ago)
- address review comments (JohannesGaessler committed 222 days ago)
- add comment [no ci] (JohannesGaessler committed 222 days ago)
- fix unittest, remove metal ifdef (JohannesGaessler committed 222 days ago)
- add comment [no ci] (JohannesGaessler committed 222 days ago)