llama.cpp
4893cc07 - server : fix crash when seq_rm fails for hybrid/recurrent models (#18391)

Committed 29 days ago
server : fix crash when seq_rm fails for hybrid/recurrent models (#18391)

* server : fix crash when seq_rm fails for hybrid/recurrent models
* server : add allow_processing param to clear_slot