llama.cpp/examples
Latest commit: e81b8e4b7f by Johannes Gäßler, 2025-08-30 16:32:10 +02:00
llama: use FA + max. GPU layers by default (#15434)

* llama: use max. GPU layers by default, auto -fa
* ggml-backend: abort instead of segfault
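
The commit message indicates that llama.cpp now offloads the maximum number of GPU layers and resolves FlashAttention automatically, so flags that previously had to be passed explicitly become optional. A minimal before/after sketch of the effect on a typical invocation, assuming the llama-cli binary and a placeholder model.gguf path:

    # before #15434: full GPU offload and FlashAttention had to be requested explicitly
    ./llama-cli -m model.gguf -ngl 99 -fa

    # after #15434: maximal offload is the default and -fa defaults to "auto";
    # the flags remain available to override the new defaults
    ./llama-cli -m model.gguf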