CS348Project/llama.cpp
mirror of https://github.com/ggml-org/llama.cpp.git synced 2025-11-19 11:57:07 +00:00
llama.cpp/ggml at commit 66625a59a54d0a7504eda4c4e83abfcd83ba1cf8
Latest commit: lhez 6e6725459a opencl: add mul_mat_f32_f32_l4_lm and mul_mat_f16_f32_l4_lm (#14809), 2025-07-30 14:56:55 -07:00
cmake           cmake : Fix BLAS link interface (ggml/1316)                                    2025-07-30 17:33:11 +03:00
include         ggml: Add initial WebGPU backend (#14521)                                      2025-07-16 18:18:51 +03:00
src             opencl: add mul_mat_f32_f32_l4_lm and mul_mat_f16_f32_l4_lm (#14809)           2025-07-30 14:56:55 -07:00
.gitignore      vulkan : cmake integration (#8119)                                             2024-07-13 18:12:39 +02:00
CMakeLists.txt  HIP: add GGML_HIP_MMQ_MFMA option to allow disableing the MFMA path. (#14930)  2025-07-29 17:44:30 +02:00