Mirror of https://github.com/ggml-org/llama.cpp.git
Commit 2002bc96bf

* server : refactoring (wip)
* server : remove llava/clip objects from build
* server : fix empty prompt handling + all slots idle logic
* server : normalize id vars
* server : code style
* server : simplify model chat template validation
* server : code style
* server : minor
* llama : llama_chat_apply_template support null buf
* server : do not process embedding requests when disabled
* server : reorganize structs and enums + naming fixes
* server : merge oai.hpp in utils.hpp
* server : refactor system prompt update at start
* server : disable cached prompts with self-extend
* server : do not process more than n_batch tokens per iter
* server: tests: embeddings use a real embeddings model (#5908)
* server, tests : bump batch to fit 1 embedding prompt
* server: tests: embeddings fix build type Debug is randomly failing (#5911)
* server: tests: embeddings, use different KV Cache size
* server: tests: embeddings, fixed prompt do not exceed n_batch, increase embedding timeout, reduce number of concurrent embeddings
* server: tests: embeddings, no need to wait for server idle as it can timeout
* server: refactor: clean up http code (#5912)
* server : avoid n_available var (ggml-ci)
* server: refactor: better http codes
* server : simplify json parsing + add comment about t_last
* server : rename server structs
* server : allow to override FQDN in tests (ggml-ci)
* server : add comments

Co-authored-by: Pierrick Hymbert <pierrick.hymbert@gmail.com>

101 lines · 2.6 KiB · Gherkin

@llama.cpp
@parallel
Feature: Parallel

  Background: Server startup
    Given a server listening on localhost:8080
    And   a model file tinyllamas/stories260K.gguf from HF repo ggml-org/models
    And   42 as server seed
    And   512 as batch size
    And   64 KV cache size
    And   2 slots
    And   continuous batching
    Then  the server is starting
    Then  the server is healthy

  Scenario Outline: Multi users completion
    Given a prompt:
      """
      Write a very long story about AI.
      """
    And a prompt:
      """
      Write another very long music lyrics.
      """
    And <n_predict> max tokens to predict
    Given concurrent completion requests
    Then the server is busy
    Then the server is idle
    And  all slots are idle
    Then all prompts are predicted with <n_predict> tokens
    Examples:
      | n_predict |
      | 128       |

  Scenario Outline: Multi users OAI completions compatibility
    Given a system prompt You are a writer.
    And   a model tinyllama-2
    Given a prompt:
      """
      Write a very long book.
      """
    And a prompt:
      """
      Write another a poem.
      """
    And <n_predict> max tokens to predict
    And streaming is <streaming>
    Given concurrent OAI completions requests
    Then the server is busy
    Then the server is idle
    Then all prompts are predicted with <n_predict> tokens
    Examples:
      | streaming | n_predict |
      | disabled  | 128       |
      | enabled   | 64        |

  Scenario Outline: Multi users OAI completions compatibility no v1
    Given a system prompt You are a writer.
    And   a model tinyllama-2
    Given a prompt:
      """
      Write a very long book.
      """
    And a prompt:
      """
      Write another a poem.
      """
    And <n_predict> max tokens to predict
    And streaming is <streaming>
    Given concurrent OAI completions requests no v1
    Then the server is busy
    Then the server is idle
    Then all prompts are predicted with <n_predict> tokens
    Examples:
      | streaming | n_predict |
      | disabled  | 128       |
      | enabled   | 64        |

  Scenario:  Multi users with total number of tokens to predict exceeds the KV Cache size #3969
    Given a prompt:
      """
      Write a very long story about AI.
      """
    And a prompt:
      """
      Write another very long music lyrics.
      """
    And a prompt:
      """
      Write a very long poem.
      """
    And a prompt:
      """
      Write a very long joke.
      """
    And 128 max tokens to predict
    Given concurrent completion requests
    Then the server is busy
    Then the server is idle
    Then all prompts are predicted