llama.cpp
Commit d8b567d2 - llama_model_loader: fail if backend cannot allocate buffer
Committed 1 year ago
References
#6187 - llama_model_loader: support multiple split/shard GGUFs
#4 - Hp/split/load model (test CI)
Author
phymbert
Parents
1c931f3d
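The commit title describes the change: when a backend cannot allocate the buffer needed for the model's tensors, the loader should abort loading with an error instead of continuing with a null buffer. A minimal sketch of that pattern is below; `backend_try_alloc_buffer` and `load_model_buffer` are hypothetical stand-ins, not the actual llama.cpp API.

```cpp
#include <cstddef>
#include <new>
#include <stdexcept>

// Hypothetical stand-in for a backend buffer allocation that may fail
// (e.g. device out of memory); returns nullptr on failure. The
// simulate_oom flag exists only to exercise the failure path here.
void * backend_try_alloc_buffer(size_t size, bool simulate_oom) {
    if (simulate_oom) {
        return nullptr;
    }
    return ::operator new(size, std::nothrow);
}

// The pattern the commit describes: check the allocation result and
// fail model loading immediately if the backend returned no buffer,
// rather than proceeding with a null buffer.
void * load_model_buffer(size_t size, bool simulate_oom = false) {
    void * buf = backend_try_alloc_buffer(size, simulate_oom);
    if (buf == nullptr) {
        throw std::runtime_error("unable to allocate backend buffer");
    }
    return buf;
}
```

Failing fast here surfaces an out-of-memory condition at load time with a clear error, instead of a later crash or silent misbehavior when the null buffer is used.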