llama.cpp
1c931f3d - Handle optional tensors
Commit
1 year ago
Handle optional tensors
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
References
#6187 - llama_model_loader: support multiple split/shard GGUFs
#4 - Hp/split/load model (test CI)
Author
phymbert
Parents
c34a5dee