llama.cpp
Commit 4893cc07
server : fix crash when seq_rm fails for hybrid/recurrent models (#18391)
29 days ago
server : fix crash when seq_rm fails for hybrid/recurrent models (#18391)

* server : fix crash when seq_rm fails for hybrid/recurrent models
* server : add allow_processing param to clear_slot
References
#18391 - server : fix crash when seq_rm fails for hybrid/recurrent models
Author
o7si
Parents
af3be131