* requirements : update transformers/torch for Embedding Gemma
This commit updates the requirements to support converting
Embedding Gemma 300m models.
The motivation for this change is that during development I had a local
copy of the transformers package, which is what I used for converting
the models. This was a mistake on my part; I should have also updated
my transformers version to the official release.
I had checked the requirements/requirements-convert_legacy_llama.txt
file and noted that the version was >=4.45.1,<5.0.0, and came to the
conclusion that no update would be needed. This assumed that
Embedding Gemma would be in a transformers release by the time
commit fb15d649ed ("llama : add support
for EmbeddingGemma 300m (#15798)") was merged, so anyone wanting to
convert the models themselves would be able to do so. However,
Embedding Gemma is currently only available in a preview release, and
this commit updates the requirements to use that preview release.
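As a sanity check before converting, one can verify the installed transformers version against the constraint in the requirements file. The snippet below is a minimal sketch, not part of this change; the specifier shown is the pre-existing one from requirements-convert_legacy_llama.txt, and the exact pin needed for the Embedding Gemma preview is left to the updated requirements files.

```python
# Minimal sketch: check the installed transformers version against a pip-style
# specifier before running convert_hf_to_gguf.py. The specifier below is the
# pre-existing constraint; adjust it to whatever the updated requirements pin
# for the Embedding Gemma preview release.
from importlib.metadata import version
from packaging.specifiers import SpecifierSet

required = SpecifierSet(">=4.45.1,<5.0.0")
installed = version("transformers")

if installed not in required:
    raise SystemExit(f"transformers {installed} does not satisfy {required}")
print(f"transformers {installed} satisfies {required}")
```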
* resolve additional python dependencies
* fix pyright errors in tokenizer test and remove unused import
# Multimodal Support in llama.cpp
This directory provides multimodal capabilities for llama.cpp. Initially intended as a showcase for running LLaVA models, its scope has expanded significantly over time to include various other vision-capable models. As a result, LLaVA is no longer the only multimodal architecture supported.
> [!IMPORTANT]
>
> Multimodal support can be viewed as a sub-project within `llama.cpp`. It is under very heavy development, and breaking changes are expected.
The naming and structure related to multimodal support have evolved, which might cause some confusion. Here's a brief timeline to clarify:
- #3436: Initial support for LLaVA 1.5 was added, introducing `llava.cpp` and `clip.cpp`. The `llava-cli` binary was created for model interaction.
- #4954: Support for MobileVLM was added, becoming the second vision model supported. This built upon the existing `llava.cpp`, `clip.cpp`, and `llava-cli` infrastructure.
- Expansion & Fragmentation: Many new models were subsequently added (e.g., #7599, #10361, #12344, and others). However, `llava-cli` lacked support for the increasingly complex chat templates required by these models. This led to the creation of model-specific binaries like `qwen2vl-cli`, `minicpmv-cli`, and `gemma3-cli`. While functional, this proliferation of command-line tools became confusing for users.
- #12849: `libmtmd` was introduced as a replacement for `llava.cpp`. Its goals include providing a single, unified command-line interface, improving the user/developer experience (UX/DX), and supporting both audio and image inputs.
- #13012: `mtmd-cli` was added, consolidating the various model-specific CLIs into a single tool powered by `libmtmd`.
## Pre-quantized models
See the list of pre-quantized models here
## How it works and what is `mmproj`?
Multimodal support in llama.cpp works by encoding images into embeddings using a separate model component, and then feeding these embeddings into the language model.
This approach keeps the multimodal components distinct from the core libllama library. Separating these allows for faster, independent development cycles. While many modern vision models are based on Vision Transformers (ViTs), their specific pre-processing and projection steps can vary significantly. Integrating this diverse complexity directly into libllama is currently challenging.
Consequently, running a multimodal model typically requires two GGUF files:
- The standard language model file.
- A corresponding multimodal projector (`mmproj`) file, which handles the image encoding and projection.
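To make the pairing concrete, here is a minimal sketch of invoking the `llama-mtmd-cli` tool with both files; the binary location and file names are placeholders, and the flags shown (`-m`, `--mmproj`, `--image`, `-p`) are the commonly used ones.

```python
# Sketch only: run a multimodal model by passing both the language-model GGUF
# and its matching mmproj GGUF. Paths and file names are placeholders.
import subprocess

subprocess.run(
    [
        "./build/bin/llama-mtmd-cli",
        "-m", "model.gguf",               # standard language model file
        "--mmproj", "mmproj-model.gguf",  # multimodal projector file
        "--image", "input.jpg",
        "-p", "Describe this image.",
    ],
    check=True,
)
```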
## What is `libmtmd`?
As outlined in the history, libmtmd is the modern library designed to replace the original llava.cpp implementation for handling multimodal inputs.
Built upon clip.cpp (similar to llava.cpp), libmtmd offers several advantages:
- Unified Interface: Aims to consolidate interaction for various multimodal models.
- Improved UX/DX: Features a more intuitive API, inspired by the `Processor` class in the Hugging Face `transformers` library.
- Flexibility: Designed to support multiple input types (text, audio, images) while respecting the wide variety of chat templates used by different models.
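For readers unfamiliar with that pattern, the sketch below shows typical `Processor` usage in plain Hugging Face `transformers`; it is only the design inspiration, not the `libmtmd` API itself, and the model name and prompt format are illustrative.

```python
# Plain Hugging Face transformers usage, shown only as the design inspiration
# behind libmtmd's unified text-plus-media entry point. Not libmtmd code.
from PIL import Image
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("llava-hf/llava-1.5-7b-hf")  # illustrative model
image = Image.open("input.jpg")
inputs = processor(
    text="USER: <image>\nDescribe this image. ASSISTANT:",
    images=image,
    return_tensors="pt",
)
print(inputs.keys())  # e.g. input_ids, attention_mask, pixel_values
```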
## How to obtain `mmproj`
Multimodal projector (mmproj) files are specific to each model architecture.
For the following models, you can use `convert_hf_to_gguf.py` with the `--mmproj` flag to get the `mmproj` file (see the example after this list):
- Gemma 3; see the guide here. Note: the 1B variant does not have vision support
- SmolVLM (from HuggingFaceTB)
- SmolVLM2 (from HuggingFaceTB)
- Pixtral 12B - only works with `transformers`-compatible checkpoint
- Qwen 2 VL and Qwen 2.5 VL (from Qwen)
- Mistral Small 3.1 24B
- InternVL 2.5 and InternVL 3 from OpenGVLab (note: we don't support conversion of `InternVL3-*-hf` model, only the non-HF version is supported; `InternLM2Model` text model is not supported)
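As a concrete example, a conversion run might look like the sketch below; the checkpoint directory and `--outtype` value are placeholders, and only the `--mmproj` flag is the part documented above.

```python
# Sketch: produce an mmproj GGUF from a local Hugging Face checkpoint using
# convert_hf_to_gguf.py with the --mmproj flag. The checkpoint path and the
# --outtype value are placeholders.
import subprocess

subprocess.run(
    [
        "python", "convert_hf_to_gguf.py",
        "path/to/hf-checkpoint",  # local directory containing the HF model
        "--outtype", "f16",
        "--mmproj",               # emit the multimodal projector
    ],
    check=True,
)
```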
For older models, please refer to the relevant guide for instructions on how to obtain or create them:
NOTE: conversion scripts are located under tools/mtmd/legacy-models