llama : add 18-layer model type for Gemma 3-270m (#15319)

This commit adds support for the 18-layer model type in the Gemma3
series, which is the size of the Gemma3-270m model.

The motivation for this commit is that this was the only change required for
Gemma3-270m to be converted to GGUF format and used with llama.cpp.

Once the model has been converted and uploaded to Hugging Face, it can be
used like this:
```console
$ ./build/bin/llama-cli -hf ggml-org/gemma-3-270m-GGUF:Q8_0
```
Author: Daniel Bevenius
Date: 2025-08-14 17:56:26 +02:00
Committed by: GitHub
Parent: e4e915912c
Commit: 7a0de96045
2 changed files with 2 additions and 0 deletions

@@ -39,6 +39,7 @@ enum llm_type {
     LLM_TYPE_410M,
     LLM_TYPE_450M,
     LLM_TYPE_475M,
+    LLM_TYPE_537M,
     LLM_TYPE_700M,
     LLM_TYPE_770M,
     LLM_TYPE_780M,
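
For context, the enum value above only names the size; the actual selection happens when llama.cpp loads the model's hyperparameters and switches on the layer count for the Gemma3 architecture. The following is a minimal, self-contained sketch of that idea, not the real llama.cpp code: the `gemma3_type_from_n_layer` and `llm_type_name` helpers are hypothetical, the 18-layer case mirrors the enum value added in the diff above, and the layer counts for the larger Gemma3 models are taken from their public configurations.
```cpp
// Illustrative sketch only: shows how a layer count can be mapped to a model
// type for the Gemma3 architecture. Helper names are hypothetical.
#include <cstdint>
#include <cstdio>

enum llm_type {
    LLM_TYPE_UNKNOWN,
    LLM_TYPE_537M, // added for the 18-layer Gemma3-270m model (this commit)
    LLM_TYPE_1B,
    LLM_TYPE_4B,
    LLM_TYPE_12B,
    LLM_TYPE_27B,
};

// Hypothetical helper: pick a Gemma3 model type from the number of layers,
// analogous to the layer-count switch llama.cpp performs while loading
// hyperparameters.
static llm_type gemma3_type_from_n_layer(uint32_t n_layer) {
    switch (n_layer) {
        case 18: return LLM_TYPE_537M; // Gemma3-270m
        case 26: return LLM_TYPE_1B;
        case 34: return LLM_TYPE_4B;
        case 48: return LLM_TYPE_12B;
        case 62: return LLM_TYPE_27B;
        default: return LLM_TYPE_UNKNOWN;
    }
}

// Hypothetical helper: human-readable name for a model type.
static const char * llm_type_name(llm_type t) {
    switch (t) {
        case LLM_TYPE_537M: return "537M";
        case LLM_TYPE_1B:   return "1B";
        case LLM_TYPE_4B:   return "4B";
        case LLM_TYPE_12B:  return "12B";
        case LLM_TYPE_27B:  return "27B";
        default:            return "?B";
    }
}

int main() {
    // Gemma3-270m has 18 transformer layers, so it now resolves to a known type.
    printf("18 layers -> %s\n", llm_type_name(gemma3_type_from_n_layer(18)));
    return 0;
}
```
Because the type is keyed on the layer count stored in the GGUF metadata rather than on a computed parameter count, recognizing the new model only takes one new enum value plus one new switch case, which matches the two one-line additions in this commit.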