CS348Project / llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-11-18 11:46:58 +00:00)
Latest commit: a5334f911e095b7e4df2de497f626953080722b8 (llama.cpp/ggml)
refactor: Compute block offsets once rather than once per token
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-07-18 13:55:51 -06:00
cmake           ggml-cpu : rework weak alias on apple targets (#14146)             2025-06-16 13:54:15 +08:00
include         ggml: Add initial WebGPU backend (#14521)                          2025-07-16 18:18:51 +03:00
src             refactor: Compute block offsets once rather than once per token    2025-07-18 13:55:51 -06:00
.gitignore      vulkan : cmake integration (#8119)                                 2024-07-13 18:12:39 +02:00
CMakeLists.txt  ggml: Add initial WebGPU backend (#14521)                          2025-07-16 18:18:51 +03:00