llama.cpp/ggml-backend.c
commit 4633d93af0
Author: Michael Podvitskiy
Date:   2024-02-10 09:29:21 +02:00

    ggml : add abort_callback for cpu backend (ggml/725)

    * a way to use abort_callback with the cpu backend
    * whisper update
