CS348Project/llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-11-02 09:12:03 +00:00
llama.cpp/ggml at commit ad126479c25cf983a0f994a08ba0911cf49ed62b

Latest commit: e308efda8e (Jeff Bolz, 2025-10-03 10:33:08 +02:00)
vulkan: in flash attention, bounds check against nem1 (don't rely on GGML_KQ_MASK_PAD) (#16316)
Name            Last commit message                                                                               Date
cmake           ggml: Skip backend library linking code when GGML_BACKEND_DL=ON (#15094)                          2025-08-07 13:45:41 +02:00
include         ggml webgpu: add support for soft_max, optimize rms_norm (#16357)                                 2025-10-02 11:00:31 -07:00
src             vulkan: in flash attention, bounds check against nem1 (don't rely on GGML_KQ_MASK_PAD) (#16316)   2025-10-03 10:33:08 +02:00
.gitignore      vulkan : cmake integration (#8119)                                                                 2024-07-13 18:12:39 +02:00
CMakeLists.txt  HIP: Disable ROCWMMA fattn on CDNA when compiled against ROCWMMA 2.0.0 (#16221)                    2025-10-01 23:09:25 +02:00
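
The cmake entry above refers to the GGML_BACKEND_DL option: when it is ON, ggml backends are built as dynamically loadable modules instead of being linked into the main library, which is why the linked commit skips the backend linking code. A minimal configure sketch, assuming a standard CMake build of this repo ("build" is an illustrative build directory name):

    # Build backends as runtime-loadable modules rather than linking them in.
    cmake -B build -DGGML_BACKEND_DL=ON
    cmake --build build --config Release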