llama.cpp
Commit 428772f3: use no_alloc to get memory requirements for model load
Committed: 32 days ago
Author: 0cc4m
Committer: 0cc4m
Parents: ae2094a3