Mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-11-09 10:17:06 +00:00)
llama.cpp / ggml at commit 144a4ce824b6bd0e48d62009d10cae1daf5308db
Latest commit: f549b0007d by Jeff Bolz, 2025-10-29 09:53:04 +01:00
vulkan: Call ggml_vk_buffer_write_2d from ggml_vk_buffer_copy (#16793)
This lets the copy to the destination device use the host-visible vidmem optimization.
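
The "host-visible vidmem optimization" in the commit message refers to the fast path where a buffer's device memory is mapped into the CPU's address space (e.g. via resizable BAR), so an upload becomes a plain memcpy instead of a staging-buffer transfer. Below is a minimal, self-contained sketch of that routing decision; DeviceBuffer, buffer_write, and buffer_copy are hypothetical stand-ins for the Vulkan backend's internal vk_buffer helpers, and the pitched 2D-row handling of the real ggml_vk_buffer_write_2d is elided.

```cpp
// Hypothetical sketch only: the real ggml_vk_buffer_write_2d / ggml_vk_buffer_copy
// operate on Vulkan buffers; this models just the routing decision that lets the
// destination write hit host-visible vidmem.
#include <cstdint>
#include <cstring>
#include <iostream>
#include <vector>

struct DeviceBuffer {
    std::vector<uint8_t> storage;   // stands in for VkDeviceMemory
    uint8_t *host_mapped = nullptr; // non-null when the allocation is host-visible
                                    // device memory the CPU can write directly
};

// Write path: with a host-visible mapping, the upload is a plain memcpy;
// otherwise it would need a staging buffer plus a GPU transfer command.
void buffer_write(DeviceBuffer &dst, size_t offset, const void *src, size_t size) {
    if (dst.host_mapped != nullptr) {
        std::memcpy(dst.host_mapped + offset, src, size);     // fast path
    } else {
        std::memcpy(dst.storage.data() + offset, src, size);  // staging path (elided)
    }
}

// Cross-device copy: read the source back to host, then delegate the write half
// to buffer_write so it reuses the fast path above. Routing the copy through the
// shared write helper is the restructuring the commit describes.
void buffer_copy(DeviceBuffer &dst, size_t dst_off,
                 const DeviceBuffer &src, size_t src_off, size_t size) {
    std::vector<uint8_t> host(size);
    std::memcpy(host.data(), src.storage.data() + src_off, size); // readback (elided)
    buffer_write(dst, dst_off, host.data(), size);
}

int main() {
    DeviceBuffer a, b;
    a.storage = {1, 2, 3, 4};
    b.storage.resize(4);
    b.host_mapped = b.storage.data(); // pretend b is mapped host-visible vidmem
    buffer_copy(b, 0, a, 0, 4);
    std::cout << int(b.storage[2]) << "\n"; // prints 3
}
```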
Name            Last commit                                                                                Date
cmake           ggml: Skip backend library linking code when GGML_BACKEND_DL=ON (#15094) (see sketch below)  2025-08-07 13:45:41 +02:00
include         Add experimental ggml-hexagon backend for the Hexagon NPU (#16547)                         2025-10-22 13:47:09 -07:00
src             vulkan: Call ggml_vk_buffer_write_2d from ggml_vk_buffer_copy (#16793)                     2025-10-29 09:53:04 +01:00
.gitignore      vulkan : cmake integration (#8119)                                                         2024-07-13 18:12:39 +02:00
CMakeLists.txt  Add experimental ggml-hexagon backend for the Hexagon NPU (#16547)                         2025-10-22 13:47:09 -07:00
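
On the cmake entry above: with GGML_BACKEND_DL=ON, each backend is built as a separate dynamic library and discovered at runtime, which is why the static backend-linking logic can be skipped. A minimal consumer sketch, assuming the ggml_backend_load_all() and device-enumeration entry points declared in ggml's public ggml-backend.h header (verify against the header of your ggml revision):

```cpp
// Sketch assuming ggml was built with GGML_BACKEND_DL=ON, so backends live in
// separate dynamic libraries that are loaded at runtime rather than linked in.
#include <cstdio>
#include "ggml-backend.h"

int main() {
    // Locate and load all available backend libraries (dlopen/LoadLibrary).
    ggml_backend_load_all();

    // Devices only appear after their backend library has been loaded.
    size_t n = ggml_backend_dev_count();
    for (size_t i = 0; i < n; i++) {
        ggml_backend_dev_t dev = ggml_backend_dev_get(i);
        std::printf("device %zu: %s (%s)\n", i,
                    ggml_backend_dev_name(dev),
                    ggml_backend_dev_description(dev));
    }
    return 0;
}
```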