llama.cpp
server : fix crash when seq_rm fails for hybrid/recurrent models
#18391
Merged


ngxson merged 2 commits into ggml-org:master from o7si:issue-18235
o7si committed: server : fix crash when seq_rm fails for hybrid/recurrent models (f5468427)
o7si requested a review from ngxson 29 days ago
o7si requested a review from ggerganov 29 days ago
github-actions added the examples and server labels
ngxson commented on 2025-12-26
o7si committed: server : add allow_processing param to clear_slot (0e8829f1)
ngxson approved these changes on 2025-12-26
ngxson merged 4893cc07 into master 29 days ago
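The PR title points at a known constraint of hybrid/recurrent models (e.g. Mamba-style layers) in llama.cpp: their state cannot be partially rolled back, so a sequence removal over a partial token range can fail, and code that assumes it always succeeds will leave the cache inconsistent or crash. The sketch below is a self-contained illustration of the defensive pattern such a fix typically uses, not the actual server.cpp change: `mock_memory`, `seq_rm`, and `truncate_cache` are hypothetical stand-ins for llama.cpp's memory API (`llama_memory_seq_rm` and the server's slot handling), and the exact fallback behavior in the merged PR is an assumption based on the title and commit messages.

```cpp
#include <cassert>
#include <vector>

// Hypothetical stand-in for a per-sequence cache. In real llama.cpp this is
// the opaque llama_memory_t; "recurrent" models cannot drop an arbitrary
// suffix of their state, only the whole thing.
struct mock_memory {
    bool recurrent;
    std::vector<int> tokens; // cached tokens for one sequence
};

// Mimics llama_memory_seq_rm(): remove tokens in [p0, p1) and report success.
// For recurrent/hybrid state, anything other than a full clear fails.
static bool seq_rm(mock_memory & mem, int p0, int p1) {
    if (p1 < 0) {
        p1 = (int) mem.tokens.size();
    }
    if (mem.recurrent && !(p0 == 0 && p1 >= (int) mem.tokens.size())) {
        return false; // partial removal unsupported for this memory type
    }
    mem.tokens.erase(mem.tokens.begin() + p0, mem.tokens.begin() + p1);
    return true;
}

// The crash-avoiding pattern: check the return value instead of assuming
// success. On failure, clear the whole sequence so the slot reprocesses its
// prompt from position 0 rather than operating on an inconsistent cache.
// Returns the position processing must resume from.
static int truncate_cache(mock_memory & mem, int n_keep) {
    if (!seq_rm(mem, n_keep, -1)) {
        seq_rm(mem, 0, -1); // full clear; always valid for p0 == 0
        return 0;           // reuse nothing, reprocess the entire prompt
    }
    return n_keep;          // reuse the first n_keep cached tokens
}
```

A caller would then reset the slot's `n_past` (or equivalent) to the returned position, which is what distinguishes a clean fallback from the pre-fix crash.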
