llama.cpp
38cf3d32
- server: add --models-memory-max parameter to allow dynamically unloading models when they exceed a memory size threshold
Date: 22 days ago
Author: 0cc4m
Committer: 0cc4m
Parents: 227ed28e
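The commit message describes a memory budget (`--models-memory-max`) past which the server dynamically unloads models. As an illustrative sketch only (not the actual llama.cpp implementation; the `ModelRegistry` class and its `load`/`unload`/`is_loaded` names are hypothetical), the core idea can be modeled as a registry that evicts least-recently-used models once their combined size exceeds the configured maximum:

```cpp
// Hypothetical sketch of threshold-based model unloading: track loaded
// models and their sizes, and evict the least-recently-used ones whenever
// the total exceeds a configurable memory budget.
#include <cstddef>
#include <list>
#include <map>
#include <string>

class ModelRegistry {
public:
    explicit ModelRegistry(size_t memory_max) : memory_max_(memory_max) {}

    // Register a model of the given size; evict LRU models while over budget,
    // always keeping at least the model that was just loaded.
    void load(const std::string & name, size_t bytes) {
        touch(name);
        sizes_[name] = bytes;
        total_ += bytes;
        while (total_ > memory_max_ && lru_.size() > 1) {
            unload(lru_.back());
        }
    }

    bool is_loaded(const std::string & name) const {
        return sizes_.count(name) != 0;
    }

    size_t total_bytes() const { return total_; }

private:
    // Move a model to the front of the LRU list (most recently used).
    void touch(const std::string & name) {
        lru_.remove(name);
        lru_.push_front(name);
    }

    // Release a model's memory accounting and drop it from the registry.
    void unload(const std::string & name) {
        total_ -= sizes_[name];
        sizes_.erase(name);
        lru_.remove(name);
    }

    size_t memory_max_;                   // budget, cf. --models-memory-max
    size_t total_ = 0;                    // bytes currently accounted for
    std::list<std::string> lru_;          // front = most recently used
    std::map<std::string, size_t> sizes_; // loaded models and their sizes
};
```

With a 10-byte budget, loading a second 6-byte model pushes the total to 12 and evicts the first model, leaving only the newest one resident. The real server presumably measures actual model memory use rather than taking sizes as arguments; this sketch only shows the eviction policy.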