llama.cpp/ggml

commit e81b8e4b7f by Johannes Gäßler
llama: use FA + max. GPU layers by default (#15434)
* llama: use max. GPU layers by default, auto -fa

* ggml-backend: abort instead of segfault
Date: 2025-08-30 16:32:10 +02:00
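
The second bullet describes a fail-fast change: rather than letting invalid backend state lead to a null-pointer dereference and a segfault, the code aborts with a diagnosable error. As a minimal sketch of that pattern (not the actual ggml-backend code; the `example_backend` type and `example_backend_use` function here are made up for illustration):

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for a ggml backend handle; not the real type. */
typedef struct example_backend {
    const char *name;
} example_backend;

/* Without the check, passing NULL would dereference the pointer and
 * segfault somewhere deep in the call. Checking and aborting with an
 * explicit message makes the failure immediate and diagnosable. */
static void example_backend_use(const example_backend *backend) {
    if (backend == NULL) {
        fprintf(stderr, "fatal: backend is NULL\n");
        abort(); /* deliberate, loud failure instead of a segfault */
    }
    printf("using backend: %s\n", backend->name);
}

int main(void) {
    example_backend cpu = { "CPU" };
    example_backend_use(&cpu); /* prints "using backend: CPU" */
    example_backend_use(NULL); /* aborts with a clear message */
    return 0;
}
```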