Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-10-27 08:21:30 +00:00
devops: add s390x containers (#15915)
* devops: add s390x dockerfile
* devops: add missing ninja
* devops: move s390x docker into cpu docker
* devops: rework s390x docker
* devops: copy more tools
* devops: add server build step
* devops: remove apt clean steps as distroless misses it
* devops: remove apt commands from distroless
* devops: fix shared libs in distroless
* devops: use correct libs path
* devops: fix shared libs
* devops: add collector stage
* devops: fix missing stage ref
* devops: fix permission issue
* devops: fix unknown model loading failures
* devops: attempt at fixing model loading failure
* devops: fix missing ggml shared object failure to load model
* devops: remove move shared objects
* devops: move libggml-cpu and blas into bin
* devops: finalise hardened server stage
* devops: add cli target
* devops: fix typos
* devops: fix missing shared libraries in base
* devops: update debian target
* devops: formalise llama.cpp loc
* Revert "devops: formalise llama.cpp loc" (this reverts commit 0a7664af84)
* devops: formalise llama.cpp loc (cherry picked from commit 0a7664af84)
* devops: attempt at fixing missing dir
* devops: attempt at making it cache the build
* devops: fix copying process
* devops: make build dir an argument
* Revert "devops: make build dir an argument" (this reverts commit 438698976b)
* devops: add build stage for gguf-py
* devops: move gguf-py installation into build stage
* devops: break system packages?
* devops: add rust compiler installer
* devops: fix rustc not found
* devops: remove cache mount to allow rustc to persist
* devops: move rustc installation to another layer
* devops: move gguf-py installation to full stage, fix copying
* devops: remove rustc installation in build
* devops: disable full target for now
* devops: attempting static build
* devops: merge s390x dockerfile into cpu for now
* devops: switch to gcc image for build step
* devops: remove build essentials
* devops: install openblas into base target
* devops: go back to s390x dockerfile
* devops: remove libggml and libblas
* devops: add full target
* devops: add break system packages
* devops: add libjpeg
* devops: add missing cmake dep
* devops: finalise docker images for s390x
* devops: add custom openblas patch
* devops: use libopenblas-dev instead of libopenblas-openmp-dev
* devops: add s390x docker build

---------

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
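The Dockerfile added below provides full, light, and server targets for linux/s390x. As a rough sketch of how one might build them locally from the repository root (BuildKit/buildx is assumed, and the image tags here are placeholders, not names used by this change):

# Build each target from the repository root (tags are examples)
docker buildx build --platform linux/s390x -f .devops/s390x.Dockerfile --target light  -t llamacpp-s390x:light  .
docker buildx build --platform linux/s390x -f .devops/s390x.Dockerfile --target server -t llamacpp-s390x:server .
docker buildx build --platform linux/s390x -f .devops/s390x.Dockerfile --target full   -t llamacpp-s390x:full   .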
.devops/s390x.Dockerfile (new file, +122 lines)
@@ -0,0 +1,122 @@
ARG GCC_VERSION=15.2.0
ARG UBUNTU_VERSION=24.04

### Build Llama.cpp stage
FROM --platform=linux/s390x gcc:${GCC_VERSION} AS build

RUN --mount=type=cache,target=/var/cache/apt \
    --mount=type=cache,target=/var/lib/apt/lists \
    apt update -y && \
    apt upgrade -y && \
    apt install -y --no-install-recommends \
        git cmake ccache ninja-build \
        # WARNING: Do not use libopenblas-openmp-dev. libopenblas-dev is faster.
        libopenblas-dev libcurl4-openssl-dev && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY . .

RUN --mount=type=cache,target=/root/.ccache \
    --mount=type=cache,target=/app/build \
    cmake -S . -B build -G Ninja \
        -DCMAKE_BUILD_TYPE=Release \
        -DCMAKE_C_COMPILER_LAUNCHER=ccache \
        -DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
        -DLLAMA_BUILD_TESTS=OFF \
        -DGGML_BACKEND_DL=OFF \
        -DGGML_NATIVE=OFF \
        -DGGML_BLAS=ON \
        -DGGML_BLAS_VENDOR=OpenBLAS && \
    cmake --build build --config Release -j $(nproc) && \
    cmake --install build --prefix /opt/llama.cpp

COPY *.py /opt/llama.cpp/bin
COPY .devops/tools.sh /opt/llama.cpp/bin

COPY gguf-py /opt/llama.cpp/gguf-py
COPY requirements.txt /opt/llama.cpp/gguf-py
COPY requirements /opt/llama.cpp/gguf-py/requirements


### Collect all llama.cpp binaries, libraries and distro libraries
FROM --platform=linux/s390x scratch AS collector

# Copy llama.cpp binaries and libraries
COPY --from=build /opt/llama.cpp/bin /llama.cpp/bin
COPY --from=build /opt/llama.cpp/lib /llama.cpp/lib
COPY --from=build /opt/llama.cpp/gguf-py /llama.cpp/gguf-py


### Base image
FROM --platform=linux/s390x ubuntu:${UBUNTU_VERSION} AS base

RUN --mount=type=cache,target=/var/cache/apt \
    --mount=type=cache,target=/var/lib/apt/lists \
    apt update -y && \
    apt install -y --no-install-recommends \
        # WARNING: Do not use libopenblas-openmp-dev. libopenblas-dev is faster.
        curl libgomp1 libopenblas-dev && \
    apt autoremove -y && \
    apt clean -y && \
    rm -rf /tmp/* /var/tmp/* && \
    find /var/cache/apt/archives /var/lib/apt/lists -not -name lock -type f -delete && \
    find /var/cache -type f -delete

# Copy llama.cpp libraries
COPY --from=collector /llama.cpp/lib /usr/lib/s390x-linux-gnu


### Full
FROM --platform=linux/s390x base AS full

ENV PATH="/root/.cargo/bin:${PATH}"
WORKDIR /app

RUN --mount=type=cache,target=/var/cache/apt \
    --mount=type=cache,target=/var/lib/apt/lists \
    apt update -y && \
    apt install -y \
        git cmake libjpeg-dev \
        python3 python3-pip python3-dev && \
    apt autoremove -y && \
    apt clean -y && \
    rm -rf /tmp/* /var/tmp/* && \
    find /var/cache/apt/archives /var/lib/apt/lists -not -name lock -type f -delete && \
    find /var/cache -type f -delete

RUN curl https://sh.rustup.rs -sSf | bash -s -- -y

COPY --from=collector /llama.cpp/bin /app
COPY --from=collector /llama.cpp/gguf-py /app/gguf-py

RUN pip install --no-cache-dir --break-system-packages \
    -r /app/gguf-py/requirements.txt

ENTRYPOINT [ "/app/tools.sh" ]


### CLI Only
FROM --platform=linux/s390x base AS light

WORKDIR /llama.cpp/bin

# Copy llama.cpp binaries and libraries
COPY --from=collector /llama.cpp/bin/llama-cli /llama.cpp/bin

ENTRYPOINT [ "/llama.cpp/bin/llama-cli" ]


### Server
FROM --platform=linux/s390x base AS server

ENV LLAMA_ARG_HOST=0.0.0.0

WORKDIR /llama.cpp/bin

# Copy llama.cpp binaries and libraries
COPY --from=collector /llama.cpp/bin/llama-server /llama.cpp/bin

EXPOSE 8080

ENTRYPOINT [ "/llama.cpp/bin/llama-server" ]
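The server stage sets LLAMA_ARG_HOST=0.0.0.0 and exposes port 8080, so a container from it only needs a model and a published port, while the light stage runs llama-cli directly. A minimal usage sketch, assuming the placeholder tags from the build example above and a local GGUF model (paths are examples):

# Serve a model over HTTP on port 8080 (model path and tag are examples)
docker run --rm -p 8080:8080 -v /path/to/models:/models llamacpp-s390x:server -m /models/model.gguf

# Run a one-off prompt with the CLI-only image
docker run --rm -v /path/to/models:/models llamacpp-s390x:light -m /models/model.gguf -p "Hello"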
.github/workflows/docker.yml (vendored, +1 line)
@@ -44,6 +44,7 @@ jobs:
          - { tag: "musa", dockerfile: ".devops/musa.Dockerfile", platforms: "linux/amd64", full: true, light: true, server: true, free_disk_space: true }
          - { tag: "intel", dockerfile: ".devops/intel.Dockerfile", platforms: "linux/amd64", full: true, light: true, server: true, free_disk_space: true }
          - { tag: "vulkan", dockerfile: ".devops/vulkan.Dockerfile", platforms: "linux/amd64", full: true, light: true, server: true, free_disk_space: false }
          - { tag: "s390x", dockerfile: ".devops/s390x.Dockerfile", platforms: "linux/s390x", full: true, light: true, server: true, free_disk_space: false }
          # Note: the rocm images are failing due to a compiler error and are disabled until this is fixed to allow the workflow to complete
          #- {tag: "rocm", dockerfile: ".devops/rocm.Dockerfile", platforms: "linux/amd64,linux/arm64", full: true, light: true, server: true, free_disk_space: true }
    steps:
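The new matrix entry builds the full, light, and server images for linux/s390x only. On a non-s390x host, cross-building or running these images typically requires QEMU user-mode emulation registered via binfmt first; one common way to do that, shown here as a sketch and not part of this change, is:

# Register s390x binfmt handlers so an amd64 host can build/run s390x images
docker run --privileged --rm tonistiigi/binfmt --install s390x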