CS348Project / llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-11-12 10:47:01 +00:00
Commit 074c4fd39df3af974e10fbee6cc6db5d9304655b: llama.cpp / ggml / src / ggml-vulkan
Latest commit 0cbee131ad by cmdr2: cuda/vulkan: specify fp32-only support for some operations in supports_op (ggml/1129) (ggml-ci), 2025-03-03 18:18:11 +02:00
Name               Last commit message                                                                     Date
cmake              fix: ggml: fix vulkan-shaders-gen build (#10448)                                        2025-01-15 14:17:42 +01:00
vulkan-shaders     vulkan: add specific MMV kernels for IQ2 and IQ3 quants + optimizations (#11595)        2025-02-28 09:42:52 +01:00
CMakeLists.txt     fix: ggml: fix vulkan-shaders-gen build (#10448)                                        2025-01-15 14:17:42 +01:00
ggml-vulkan.cpp    cuda/vulkan: specify fp32-only support for some operations in supports_op (ggml/1129)   2025-03-03 18:18:11 +02:00