llama.cpp/convert_hf_to_gguf.py

Latest commit: 4d196981d4 by Sigbjørn Skjæret, 2025-08-17 14:47:42 +02:00
convert : force patch_embd weights to F16 or F32 to avoid broken GGUFs (#15367)

* force patch_embd weights to f32
* use MmprojModel base tensor_force_quant instead
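The bullets above point at the script's tensor_force_quant hook, which lets a model class pin specific tensors to a fixed type instead of the quantization the user requested. Below is a minimal sketch of what such an override could look like, assuming the gguf Python package's GGMLQuantizationType and LlamaFileType enums; the class skeleton, the ftype default, and the exact substring check are illustrative assumptions, not the verbatim diff:

    # Illustrative sketch only: mirrors the idea described in commit
    # 4d196981d4 (pin patch_embd weights to F16/F32 so they never get
    # quantized into a broken GGUF), not the actual change.
    import gguf


    class MmprojModel:
        # In the real script, ftype is derived from the --outtype CLI flag;
        # defaulting it here just keeps the sketch self-contained.
        ftype: gguf.LlamaFileType = gguf.LlamaFileType.MOSTLY_F16

        def tensor_force_quant(self, name: str, new_name: str, bid: int | None, n_dims: int):
            del bid, name, n_dims  # unused by this particular check
            if ".patch_embd." in new_name:
                # Quantizing the vision patch embedding too aggressively can
                # corrupt the resulting GGUF, so force a float type instead.
                if self.ftype == gguf.LlamaFileType.MOSTLY_F16:
                    return gguf.GGMLQuantizationType.F16
                return gguf.GGMLQuantizationType.F32
            return False  # no override; use the normally selected type

Putting the override on the MmprojModel base class means every multimodal-projector subclass inherits the safeguard without repeating it, which is what the second bullet alludes to.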

400 KiB · Executable File