CS348Project/llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-11-13 10:57:15 +00:00
llama.cpp/ggml at commit 12bbc3fa50b6df03318a4451c9a2210200a0b28d
Latest commit: 9d0882840e by ai-fonsi, "Disable CUDA host buffers on integrated GPUs (#16308)", 2025-10-08 20:21:46 +02:00
Name            Last commit                                                                    Date
cmake           ggml: Skip backend library linking code when GGML_BACKEND_DL=ON (#15094)      2025-08-07 13:45:41 +02:00
include         rpc : add support for multiple devices (#16276)                                2025-10-04 12:49:16 +03:00
src             Disable CUDA host buffers on integrated GPUs (#16308)                          2025-10-08 20:21:46 +02:00
.gitignore      vulkan : cmake integration (#8119)                                             2024-07-13 18:12:39 +02:00
CMakeLists.txt  ggml webgpu: profiling, CI updates, reworking of command submission (#16452)   2025-10-07 13:48:56 -07:00