---
base_model:
- {base_model}
---
# {model_name} GGUF

The recommended way to run this model is with `llama-server`:

```sh
# -c 0 uses the model's full training context size; -fa enables flash attention
llama-server -hf {namespace}/{model_name}-GGUF -c 0 -fa
```

Then open http://localhost:8080 in your browser to use the built-in web UI.
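
`llama-server` also exposes an OpenAI-compatible HTTP API on the same port, so the model can be queried programmatically. A minimal sketch with curl (the prompt below is only an example):

```sh
# Send a chat request to the OpenAI-compatible endpoint of the running server.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Hello!"}
    ]
  }'
```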