llama.cpp/common

commit e81b8e4b7f
Author: Johannes Gäßler
Date:   2025-08-30 16:32:10 +02:00

    llama: use FA + max. GPU layers by default (#15434)

    * llama: use max. GPU layers by default, auto -fa
    * ggml-backend: abort instead of segfault
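The second bullet describes a fail-fast hardening: rather than letting an invalid backend lookup dereference a null pointer and segfault later, the code aborts immediately with a diagnostic. Below is a minimal sketch of that pattern in C++; the names `lookup_backend` and `require_backend` are hypothetical stand-ins for illustration, not ggml-backend's actual API.

    #include <cstdio>
    #include <cstdlib>

    struct backend { const char *name; };

    // Hypothetical registry lookup; returns nullptr when no backend matches.
    static backend *lookup_backend(const char *name) {
        static backend cpu{"cpu"};
        return (name && name[0] == 'c') ? &cpu : nullptr;
    }

    // Before: callers dereferenced the result directly and segfaulted on nullptr.
    // After: validate the result and abort with a message, so the failure is
    // reported at its source instead of as a crash somewhere downstream.
    static backend *require_backend(const char *name) {
        backend *b = lookup_backend(name);
        if (b == nullptr) {
            std::fprintf(stderr, "fatal: no backend named '%s'\n",
                         name ? name : "(null)");
            std::abort();  // deliberate abort instead of a later segfault
        }
        return b;
    }

    int main() {
        std::printf("%s\n", require_backend("cpu")->name);  // ok
        require_backend("gpu42");  // aborts with a clear error message
    }

The trade-off is the usual one for assertion-style checks: an abort loses the process but produces an actionable error at the point of failure, which is far easier to debug than a segfault in unrelated code.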