CS348Project/llama.cpp (mirror of https://github.com/ggml-org/llama.cpp.git)
Path: llama.cpp/ggml/src/ggml-cann
Commit: d80fb71f8b8bf69ec095ba281f8248d136d21c76
Latest commit: 904837e0cb by Dou Xinpeng (2024-09-25 11:30:38 +08:00)
cann: fix crash when llama-bench is running on multiple cann devices (#9627)
Name             Last commit message                                                             Last commit date
kernels          cann: fix buffer_num and runtime speed slowly error (#8865)                     2024-08-05 21:10:37 +08:00
.clang-format    [CANN] Add Ascend NPU backend (#6035)                                           2024-07-17 14:23:50 +03:00
acl_tensor.cpp   cann: support q4_0 model (#8822)                                                2024-08-05 12:22:30 +08:00
acl_tensor.h     cann: support q4_0 model (#8822)                                                2024-08-05 12:22:30 +08:00
aclnn_ops.cpp    ggml : move rope type enum to ggml.h (#8949)                                    2024-08-13 21:13:15 +02:00
aclnn_ops.h      [CANN] Add Ascend NPU backend (#6035)                                           2024-07-17 14:23:50 +03:00
common.h         cann: fix crash when llama-bench is running on multiple cann devices (#9627)    2024-09-25 11:30:38 +08:00
Doxyfile         cann : fix doxy (ggml/0)                                                        2024-09-08 11:05:55 +03:00