## Running MUSA CI in a Docker Container

Assuming `$PWD` is the root of the llama.cpp repository, follow these steps to set up and run MUSA CI in a Docker container:
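
If you do not already have a local checkout, one way to obtain it is to clone the repository and work from its root (a minimal sketch, assuming the upstream GitHub repository):

```bash
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
```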

### 1. Create a local directory to store cached models, configuration files and venv:

```bash
mkdir -p $HOME/llama.cpp/ci-cache
```

### 2. Create a local directory to store CI run results:

```bash
mkdir -p $HOME/llama.cpp/ci-results
```

### 3. Start a Docker container and run the CI:

```bash
# --privileged gives the container access to the host's GPU devices;
# the bind mounts expose the cache, results, and source directories inside the container.
docker run --privileged -it \
    -v $HOME/llama.cpp/ci-cache:/ci-cache \
    -v $HOME/llama.cpp/ci-results:/ci-results \
    -v $PWD:/ws -w /ws \
    mthreads/musa:rc4.2.0-devel-ubuntu22.04-amd64
```
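
Before starting a long CI run, you may want to confirm that the GPU is visible inside the container. One possible sanity check is the `mthreads-gmi` utility (the MUSA counterpart of `nvidia-smi`); whether it is present depends on the image and the host driver installation, so treat this as an optional check rather than part of the CI itself:

```bash
# Inside the container: list the Moore Threads GPUs, if mthreads-gmi is available.
mthreads-gmi
```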

Inside the container, execute the following commands:

```bash
# Install the packages required by the CI scripts.
apt update -y && apt install -y bc cmake ccache git python3.10-venv time unzip wget
# The workspace is bind-mounted and owned by a different user, so mark it safe for git.
git config --global --add safe.directory /ws
# Run the full CI with the MUSA backend enabled, writing results and caches to the mounted directories.
GG_BUILD_MUSA=1 bash ./ci/run.sh /ci-results /ci-cache
```
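
If you prefer a non-interactive run, the same steps can be chained into a single `docker run` invocation that removes the container when it finishes (a sketch reusing the image and mounts from above):

```bash
docker run --privileged --rm \
    -v $HOME/llama.cpp/ci-cache:/ci-cache \
    -v $HOME/llama.cpp/ci-results:/ci-results \
    -v $PWD:/ws -w /ws \
    mthreads/musa:rc4.2.0-devel-ubuntu22.04-amd64 \
    bash -c "apt update -y && apt install -y bc cmake ccache git python3.10-venv time unzip wget && \
             git config --global --add safe.directory /ws && \
             GG_BUILD_MUSA=1 bash ./ci/run.sh /ci-results /ci-cache"
```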

This setup ensures that the CI runs within an isolated Docker environment while maintaining cached files and results across runs.