llama.cpp/examples/server/bench/script.js
Pierrick Hymbert 2f0ee84b9b server: bench: minor fixes (#10765)
* server/bench:
- support OpenAI-standard streaming output, terminated with [DONE]\n\n
- export raw k6 results as CSV
- fix too many idle TCP connections in tcp_wait
- add a metric for time to emit first token (see the sketch after this list)
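A minimal k6 sketch of how the streaming termination and first-token timing could look, assuming k6 is built with the xk6-sse extension for server-sent events; the endpoint URL, metric name, and request body are placeholders, not the actual contents of script.js:

```js
import sse from 'k6/x/sse'          // assumes a k6 build with the xk6-sse extension
import { Trend } from 'k6/metrics'

// Hypothetical metric name; values recorded in milliseconds.
const timeToFirstToken = new Trend('time_to_emit_first_token', true)

export default function () {
    const start = Date.now()
    let firstToken = true

    // Placeholder URL and body for an OpenAI-compatible streaming completion.
    sse.open('http://localhost:8080/v1/chat/completions', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
            messages: [{ role: 'user', content: 'Hello' }],
            stream: true,
        }),
    }, function (client) {
        client.on('event', function (event) {
            // OpenAI-style streams end with a literal "[DONE]\n\n" frame,
            // which the SSE layer delivers as data === '[DONE]'.
            if (event.data === '[DONE]') {
                client.close()
                return
            }
            if (firstToken) {
                timeToFirstToken.add(Date.now() - start)
                firstToken = false
            }
        })
        client.on('error', function () {
            client.close()
        })
    })
}
```

Raw results can then be exported with k6's built-in CSV output, e.g. `k6 run --out csv=raw.csv script.js`.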

* server/bench:
- fix the case where Prometheus is not started
- wait for the server to be ready before starting the bench (see the sketch after this list)
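A sketch of how the readiness wait could be done in k6's setup() stage, polling the server's /health endpoint (which llama.cpp's server exposes and which returns HTTP 200 once the model is loaded); the SERVER_BENCH_URL variable and the 60-second cap are placeholders:

```js
import http from 'k6/http'
import { sleep } from 'k6'

// Placeholder base URL, overridable via an environment variable.
const server_url = __ENV.SERVER_BENCH_URL || 'http://localhost:8080'

export function setup() {
    // Poll /health until the server reports ready, with an arbitrary 60 s cap.
    for (let i = 0; i < 60; i++) {
        if (http.get(`${server_url}/health`).status === 200) {
            return
        }
        sleep(1)
    }
    throw new Error('server is not ready, aborting bench')
}
```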
2025-01-02 18:06:12 +01:00
