llama.cpp/scripts/tool_bench.py
Last commit: e81b8e4b7f by Johannes Gäßler, 2025-08-30 16:32:10 +02:00

llama: use FA + max. GPU layers by default (#15434)

* llama: use max. GPU layers by default, auto -fa
* ggml-backend: abort instead of segfault

15 KiB · Executable File
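
The commit above changes llama.cpp's defaults so that all model layers are offloaded to the GPU and Flash Attention is selected automatically. As an illustration only, not taken from tool_bench.py, here is a minimal Python sketch that launches llama-server with those settings pinned explicitly rather than relying on the new defaults. It assumes a llama-server binary on PATH; the --n-gpu-layers and --flash-attn flag names exist in llama.cpp, while the model path, port, and the "auto" value are assumptions based on the commit message.

# Hypothetical sketch: start llama-server with the settings that #15434
# makes the default (all layers offloaded to the GPU, Flash Attention
# decided automatically). Flag values here are assumptions, not taken
# from tool_bench.py.
import subprocess

server_cmd = [
    "llama-server",
    "--model", "models/model.gguf",   # placeholder model path
    "--n-gpu-layers", "999",          # effectively "offload all layers"
    "--flash-attn", "auto",           # let the backend decide, per the commit message
    "--port", "8080",
]

# Launch the server; a real benchmark driver would wait for readiness
# and then issue requests against http://localhost:8080.
proc = subprocess.Popen(server_cmd)
proc.wait()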