llama.cpp
e39eba26
- read n_ctx back after making llama_context (#21939)
Commit · 14 days ago
References
#21939 - llama-diffusion-cli: read n_ctx back after creating the llama_context so the CLI no longer rejects all input when -c is not passed
Author
smashedpumpkin
Parents
5d14e5d1