llama.cpp
428772f3 - use no_alloc to get memory requirements for model load
Commit (1 day ago): use no_alloc to get memory requirements for model load
References
#21231 - server: add router device memory margin parameter for dynamic unloading
Author: 0cc4m
Committer: 0cc4m
Parents: ae2094a3