CS348Project / llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-11-01 09:01:57 +00:00)
Branch: sl/auto-flash-attn
Path: llama.cpp/ggml
Latest commit: afc4a7de65 by slaren, "llama : enable flash attn automatically when supported" (2024-10-30 23:30:06 +01:00)

Name            Last commit                                                    Last updated
cmake           llama : reorganize source code + improve CMake (#8006)        2024-06-26 18:33:02 +03:00
include         llama : refactor model loader with backend registry (#10026)  2024-10-30 02:01:23 +01:00
src             llama : enable flash attn automatically when supported        2024-10-30 23:30:06 +01:00
.gitignore      vulkan : cmake integration (#8119)                             2024-07-13 18:12:39 +02:00
CMakeLists.txt  add amx kernel for gemm (#8998)                                2024-10-18 13:34:36 +08:00