llama.cpp
75cd4c77 - ci: bench: support sse and fix prompt processing time / server: add tokens usage in stream OAI response (#6495)

Commit · 2 years ago
ci: bench: support sse and fix prompt processing time / server: add tokens usage in stream OAI response (#6495)

* ci: bench: support SSE and fix prompt processing time; server: add tokens usage in stream mode
* ci: bench: README.md EOL
* ci: bench: remove total pp and tg, as they are not accurate
* ci: bench: fix the case when no token is generated
* ci: bench: switch to the 95th percentile for pp and tg, as it is closer to what the server exports in metrics
* ci: bench: fix finish-reason rate
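To illustrate the headline change (bench reading the stream over SSE and the server reporting token usage in stream mode), here is a minimal sketch of parsing an OpenAI-style SSE chat-completion stream and extracting the `usage` object from the final chunk. This is not the actual llama.cpp bench code; the sample payloads follow the generic `chat.completion.chunk` wire format, and the helper name is illustrative.

```python
import json

def usage_from_sse(stream_text: str):
    """Return the last `usage` object seen in an SSE stream, or None."""
    usage = None
    for line in stream_text.splitlines():
        if not line.startswith("data: "):
            continue  # skip SSE comments, keep-alives, and blank lines
        payload = line[len("data: "):]
        if payload.strip() == "[DONE]":
            break  # end-of-stream sentinel used by OpenAI-style servers
        chunk = json.loads(payload)
        if chunk.get("usage") is not None:
            usage = chunk["usage"]  # usually arrives on the final chunk
    return usage

# Hypothetical stream: one content delta, then a chunk carrying usage.
sample = "\n".join([
    'data: {"choices":[{"delta":{"content":"Hi"}}]}',
    'data: {"choices":[{"delta":{}}],'
    '"usage":{"prompt_tokens":12,"completion_tokens":1,"total_tokens":13}}',
    "data: [DONE]",
])
print(usage_from_sse(sample))
```

A bench client built this way can count tokens from the server's own `usage` report instead of re-tokenizing locally, which is what makes per-request pp/tg timings accurate in stream mode.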