llama.cpp
Add support for batch size to `--perplexity`
#407
Merged