# Server tests

Python-based server tests using pytest.

Tests target GitHub Actions job runners with 4 vCPUs.

Note: If inference on the host is faster than on the GitHub runners, parallel scenarios may fail at random. To mitigate this, increase the values of `n_predict` and `kv_size` in the affected scenarios.
## Install dependencies

```sh
pip install -r requirements.txt
```
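Optionally, you can install the dependencies into a virtual environment to keep them isolated from your system Python. This is a standard Python workflow, not something specific to this repo:

```sh
# optional: create and activate a virtual environment before installing
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```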
## Run tests

- Build the server:

```sh
cd ../../..
cmake -B build -DLLAMA_CURL=ON
cmake --build build --target llama-server
```

- Start the tests: `./tests.sh`
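Since `tests.sh` passes extra arguments through to pytest (see the `-s -v -x` example below), you can also run a single test file. The file name here is illustrative; substitute an actual test file from this directory:

```sh
# run a single test file (hypothetical name; pick a real one from this directory)
./tests.sh unit/test_completion.py -v -x
```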
It's possible to override some scenario step values with environment variables:

| variable | description |
|---|---|
| `PORT` | `context.server_port` to set the listening port of the server during the scenario, default: `8080` |
| `LLAMA_SERVER_BIN_PATH` | to change the server binary path, default: `../../../build/bin/llama-server` |
| `DEBUG` | to enable steps and server verbose mode `--verbose` |
| `N_GPU_LAYERS` | number of model layers to offload to VRAM (`-ngl`, `--n-gpu-layers`) |
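These variables can be combined. For example (a sketch; adjust the binary path and layer count to your setup):

```sh
# use a custom server binary, offload all layers to the GPU, and enable verbose output
LLAMA_SERVER_BIN_PATH=../../../build/bin/llama-server N_GPU_LAYERS=99 DEBUG=1 ./tests.sh
```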
To run slow tests:

```sh
SLOW_TESTS=1 ./tests.sh
```
To run with stdout/stderr displayed in real time (verbose output, but useful for debugging):

```sh
DEBUG=1 ./tests.sh -s -v -x
```
Hint: You can compile and run the tests in a single command, which is useful for local development:

```sh
cmake --build build -j --target llama-server && ./examples/server/tests/tests.sh
```
To see all available arguments, please refer to the pytest documentation.
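For instance, assuming the script forwards these flags to pytest as in the examples above, you can filter tests by keyword and stop at the first failure:

```sh
# select tests whose names match a keyword expression and stop on the first failure
./tests.sh -k "completion" -x
```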