# Server tests
Python-based server tests using BDD with behave:

- `issues.feature` Pending issues scenarios
- `parallel.feature` Scenarios involving multiple slots and concurrent requests
- `security.feature` Security, CORS and API key
- `server.feature` Server base scenarios: completion, embedding, tokenization, etc.
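Each feature file is backed by Python step definitions. As a rough illustration, a minimal behave step might look like the sketch below (the step text and attribute name are hypothetical; the real implementations live in `steps.py` and differ):

```python
# Minimal behave step definition sketch; the step text and the
# context attribute below are illustrative, not the project's actual steps.
from behave import step


@step("a prompt {prompt}")
def step_prompt(context, prompt):
    # behave passes a shared context object between steps;
    # store the prompt for a later request step to use.
    context.prompt = prompt
```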
Tests target GitHub workflow job runners with 4 vCPUs.
Requests are made with aiohttp, an asyncio-based HTTP client.
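For example, a completion request could be issued along these lines (a sketch against the server's `/completion` endpoint; the actual test client code differs):

```python
import aiohttp
import asyncio


async def request_completion(base_url: str, prompt: str, n_predict: int) -> dict:
    # POST a completion request to the server and return the parsed JSON body.
    async with aiohttp.ClientSession() as session:
        async with session.post(f"{base_url}/completion",
                                json={"prompt": prompt, "n_predict": n_predict}) as resp:
            assert resp.status == 200
            return await resp.json()


# Example usage against a locally running test server.
print(asyncio.run(request_completion("http://localhost:8080", "Once upon a time", 32)))
```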
Note: if the host's inference speed is faster than the GitHub runners', the parallel scenarios may randomly fail. To mitigate this, you can increase the `n_predict` and `kv_size` values.
### Install dependencies

`pip install -r requirements.txt`
### Run tests

1. Build the server:

   ```shell
   cd ../../..
   mkdir build
   cd build
   cmake ../
   cmake --build . --target server
   ```

2. Download the required model:

   `../../../scripts/hf.sh --repo ggml-org/models --file tinyllamas/stories260K.gguf`

3. Start the tests: `./tests.sh`
It's possible to override some scenario step values with environment variables:

- `PORT` -> `context.server_port` to set the listening port of the server during the scenario, default: `8080`
- `LLAMA_SERVER_BIN_PATH` -> to change the server binary path, default: `../../../build/bin/server`
- `DEBUG` -> `"ON"` to enable verbose mode for both the steps and the server (`--verbose`)
- `SERVER_LOG_FORMAT_JSON` -> if set, switches server logs to JSON format
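A sketch of how such overrides can be read on the Python side (the variable names here are illustrative; see `steps.py` for the actual handling):

```python
import os

# Defaults mirror the README; the real parsing in steps.py may differ.
server_port = int(os.environ.get("PORT", "8080"))
server_bin_path = os.environ.get("LLAMA_SERVER_BIN_PATH", "../../../build/bin/server")
debug = os.environ.get("DEBUG", "") == "ON"            # enables --verbose
log_format_json = "SERVER_LOG_FORMAT_JSON" in os.environ
```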
### Run `@bug`, `@wip` or `@wrong_usage` annotated scenarios
A Feature or Scenario must be annotated with `@llama.cpp` to be included in the default scope.

- `@bug` links a scenario with a GitHub issue.
- `@wrong_usage` marks user issues that are actually expected behavior.
- `@wip` focuses on a scenario that is a work in progress.
To run a scenario annotated with `@bug`, start:

`DEBUG=ON ./tests.sh --no-skipped --tags bug`
After changing the logic in `steps.py`, ensure that the `@bug` and `@wrong_usage` scenarios are updated.
