mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-11-03 09:22:01 +00:00)
server : test script : add timeout for all requests (#9282)
@@ -52,8 +52,8 @@ Feature: Parallel
    Then all prompts are predicted with <n_predict> tokens

    Examples:
      | streaming | n_predict |
      | disabled  | 200       |
      | enabled   | 200       |
      | disabled  | 128       |
      | enabled   | 64        |

  Scenario Outline: Multi users OAI completions compatibility no v1
    Given a system prompt You are a writer.
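The commit title says the test script now passes a timeout with every request, so a hung server fails the suite quickly instead of blocking it indefinitely. A minimal sketch of that idea using only Python's standard library (the helper name and default value here are illustrative assumptions, not code from the actual llama.cpp test script):

```python
import urllib.request

# Hypothetical default: every request in the test suite gets an explicit
# timeout rather than relying on the library's (often unbounded) default.
DEFAULT_TIMEOUT_SECONDS = 10.0

def fetch_with_timeout(url: str, timeout: float = DEFAULT_TIMEOUT_SECONDS) -> bytes:
    # urllib raises TimeoutError if the server does not respond within
    # `timeout` seconds, so a stalled endpoint surfaces as a test failure.
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read()
```

The same principle applies whichever HTTP client the tests actually use: the timeout is threaded through every call site, not just a few.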