llama.cpp
server: stop generation at `n_ctx_train` if `n_predict` is not set
#6638
Merged
Commits (7)
- server: cap n_predict if not set to n_ctx_train (phymbert, 1 year ago)
- Merge remote-tracking branch 'refs/remotes/origin/master' into hp/server/avoid-infinite-loop (phymbert, 1 year ago)
- server: fix infinite loop (phymbert, 1 year ago)
- server: infinite loop, move in process_token (phymbert, 1 year ago)
- minor: spaces (phymbert, 1 year ago)
- minor: spaces (phymbert, 1 year ago)
- server: include prompt tokens in the EOS limit (phymbert, 1 year ago)
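The commits describe the fix's shape: when a client omits `n_predict`, the server caps generation at `n_ctx_train`, and (per the last commit) prompt tokens count toward that limit, so a model that never emits EOS cannot loop forever. A minimal sketch of that stopping rule, with illustrative function and parameter names rather than the actual llama.cpp server identifiers:

```cpp
#include <cassert>

// Sketch of the PR's stopping rule; names are illustrative, not the
// real llama.cpp server code.
//
// n_prompt    - number of tokens in the prompt
// n_decoded   - number of tokens generated so far
// n_predict   - client-requested generation limit, or -1 if not set
// n_ctx_train - context size the model was trained with
bool should_stop(int n_prompt, int n_decoded, int n_predict, int n_ctx_train) {
    if (n_predict >= 0) {
        // Client set an explicit limit: honor it as-is.
        return n_decoded >= n_predict;
    }
    // No limit given: cap at the training context, counting the prompt
    // tokens toward the limit ("include prompt tokens in the EOS limit").
    return n_prompt + n_decoded >= n_ctx_train;
}
```

This check would run once per generated token (the commits suggest it was moved into `process_token`), turning an unbounded generation into one bounded by the model's own training context.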