llama.cpp/examples
Tobias Lütke 7a3895641c allow server to multithread
Because web browsers send a lot of garbage requests, we want the server
to multithread when serving 404s for favicons etc. To avoid blowing up
llama, we just take a mutex when it is invoked.
2023-07-04 09:14:49 -04:00