llama.cpp PR #5252 (Merged)
bug: Free the allocated tokens in the batch
