CS348Project / llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-11-17 11:37:10 +00:00
llama.cpp/ggml/src/ggml-cann @ e81b8e4b7f5ab870836fad26d154a7507b341b36
Latest commit: ef476916bb "CANN: FIx compiler warnings" (#15661)
Author: Chenguang Li
Signed-off-by: noemotiovon <757486878@qq.com>
Date: 2025-08-30 10:18:35 +08:00
File               Last commit                                                           Date
acl_tensor.cpp     CANN: Implement GLU ops (#14884)                                      2025-07-26 17:56:18 +08:00
acl_tensor.h       CANN: Add the basic supports of Flash Attention kernel (#13627)      2025-05-26 10:20:18 +08:00
aclnn_ops.cpp      CANN: refactor mask handling and improve performance in FA (#15561)  2025-08-27 17:21:41 +08:00
aclnn_ops.h        CANN: Add ggml_set_rows (#14943)                                      2025-07-29 22:36:43 +08:00
CMakeLists.txt     CANN: add support for ACL Graph (#15065)                              2025-08-06 14:12:42 +08:00
common.h           kv-cache : remove LLAMA_SET_ROWS checks (#15505)                      2025-08-28 12:27:02 +03:00
Doxyfile           CANN: Add the basic supports of Flash Attention kernel (#13627)      2025-05-26 10:20:18 +08:00
ggml-cann.cpp      CANN: FIx compiler warnings (#15661)                                  2025-08-30 10:18:35 +08:00