CS348Project / llama.cpp
mirror of https://github.com/ggml-org/llama.cpp.git
synced 2025-10-27 08:21:30 +00:00
Files at commit 28b5f190ef1dbea5edf82dbc8b4407b721fadd13
Path: llama.cpp/ggml/src/ggml-musa
History
Latest commit: Johannes Gäßler, 7a6e91ad26: CUDA: replace GGML_CUDA_F16 with CUDA arch checks (#15433), 2025-08-20 16:58:49 +02:00
CMakeLists.txt   CUDA: replace GGML_CUDA_F16 with CUDA arch checks (#15433)                                                           2025-08-20 16:58:49 +02:00
mudnn.cu         musa: Upgrade MUSA SDK version to rc4.0.1 and use mudnn::Unary::IDENTITY op to accelerate D2D memory copy (#13647)   2025-05-21 09:58:49 +08:00
mudnn.cuh        musa: enable fp16 mma (all) and cublas on qy2 (#13842)                                                               2025-06-26 12:11:59 +08:00