CS348Project/llama.cpp (mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-11-13 10:57:15 +00:00)
llama.cpp/ggml/src/ggml-cann (tree 1d660d2fae42ea2e1d3569638e722bf7a37b6b19)
Latest commit: c0b45097c3 by Jeff Bolz, "rename optimize_graph to graph_optimize" (#16082), 2025-09-18 13:46:17 -05:00
File             Last commit                                                       Date
acl_tensor.cpp   CANN: Implement GLU ops (#14884)                                  2025-07-26 17:56:18 +08:00
acl_tensor.h     CANN: Add the basic supports of Flash Attention kernel (#13627)   2025-05-26 10:20:18 +08:00
aclnn_ops.cpp    CANN: Add ROPE sin/cos cache for reuse (#15912)                   2025-09-10 18:42:00 +08:00
aclnn_ops.h      CANN: Add ggml_set_rows (#14943)                                  2025-07-29 22:36:43 +08:00
CMakeLists.txt   CANN: add support for ACL Graph (#15065)                          2025-08-06 14:12:42 +08:00
common.h         CANN: Optimize ggml_cann_set_device (#15935)                      2025-09-17 14:33:08 +08:00
Doxyfile         CANN: Add the basic supports of Flash Attention kernel (#13627)   2025-05-26 10:20:18 +08:00
ggml-cann.cpp    rename optimize_graph to graph_optimize (#16082)                  2025-09-18 13:46:17 -05:00