# Server tests
Python-based server test scenarios using pytest.

Tests target GitHub workflow job runners with 4 vCPUs.

Note: If inference on the host is faster than on the GitHub runners, the parallel scenarios may randomly fail. To mitigate this, you can increase the `n_predict` and `kv_size` values.
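For orientation, below is a minimal sketch of what such a pytest scenario can look like. It is not one of the repository's actual tests: it assumes a `llama-server` instance already listening on `localhost:8080` and the `requests` package, whereas the shipped tests use shared helper fixtures to start and stop the server.

```python
# Illustrative sketch only -- not one of the repository's actual tests.
# Assumes a llama-server instance is already listening on localhost:8080.
import requests


def test_completion_returns_content():
    # Raising n_predict here (and the server's context/KV size at launch)
    # makes parallel scenarios more tolerant of fast host inference.
    res = requests.post(
        "http://localhost:8080/completion",
        json={"prompt": "Hello", "n_predict": 32},
    )
    assert res.status_code == 200
    assert len(res.json()["content"]) > 0
```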
## Install dependencies

```shell
pip install -r requirements.txt
```
## Run tests

- Build the server

  ```shell
  cd ../../..
  cmake -B build -DLLAMA_CURL=ON
  cmake --build build --target llama-server
  ```

- Start the tests: `./tests.sh`
It's possible to override some scenario step values with environment variables:
| variable | description |
|---|---|
| `PORT` | `context.server_port` to set the listening port of the server during scenario, default: `8080` |
| `LLAMA_SERVER_BIN_PATH` | to change the server binary path, default: `../../../build/bin/llama-server` |
| `DEBUG` | to enable steps and server verbose mode `--verbose` |
| `N_GPU_LAYERS` | number of model layers to offload to VRAM `-ngl --n-gpu-layers` |
| `LLAMA_CACHE` | by default server tests re-download models to the `tmp` subfolder. Set this to your cache (e.g. `$HOME/Library/Caches/llama.cpp` on Mac or `$HOME/.cache/llama.cpp` on Unix) to avoid this |
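As a rough illustration of how these variables map onto the server invocation, a harness could do something like the sketch below. This is only a sketch under those assumptions, not the actual code in the test suite.

```python
# Rough sketch of how a test harness might consume these variables;
# the real implementation in the test suite may differ in detail.
import os

server_port = os.environ.get("PORT", "8080")
server_bin = os.environ.get("LLAMA_SERVER_BIN_PATH", "../../../build/bin/llama-server")
n_gpu_layers = os.environ.get("N_GPU_LAYERS", "0")
debug = "DEBUG" in os.environ

# Build the llama-server command line from the overrides above.
server_args = [server_bin, "--port", server_port, "-ngl", n_gpu_layers]
if debug:
    server_args.append("--verbose")
```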
To run slow tests (will download many models, make sure to set `LLAMA_CACHE` if needed):

```shell
SLOW_TESTS=1 ./tests.sh
```
To run with stdout/stderr displayed in real time (verbose output, but useful for debugging):

```shell
DEBUG=1 ./tests.sh -s -v -x
```
To run all the tests in a file:

```shell
./tests.sh unit/test_chat_completion.py -v -x
```
To run a single test:

```shell
./tests.sh unit/test_chat_completion.py::test_invalid_chat_completion_req
```
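To give an idea of what a test like this typically checks, here is a hedged sketch, not the actual `test_invalid_chat_completion_req` implementation; it assumes a `llama-server` instance listening on `localhost:8080` and simply asserts that a malformed request is rejected:

```python
# Hedged sketch -- not the actual test_invalid_chat_completion_req.
# Assumes a llama-server instance listening on localhost:8080.
import requests


def test_invalid_chat_completion_req_sketch():
    # "messages" must be a list of {role, content} objects; a bare string
    # should be rejected by the server with an error status.
    res = requests.post(
        "http://localhost:8080/chat/completions",
        json={"messages": "not-a-list"},
    )
    assert res.status_code >= 400
```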
Hint: You can compile and run the tests in a single command, which is useful for local development:

```shell
cmake --build build -j --target llama-server && ./examples/server/tests/tests.sh
```
To see all available arguments, please refer to the [pytest documentation](https://docs.pytest.org/en/stable/).