CS348Project / llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-11-14 11:07:10 +00:00)
Path: llama.cpp/ggml/src/ggml-opencl (at 3e1d29348b5d77269f6931500dd1c1a729d429c8)
Latest commit: 97a20c012b by lhez, "opencl: use max_alloc_size in backend ctx instead of querying again (#12705)", 2025-04-02 17:01:42 -07:00
kernels          opencl: add multi and vision rope, gelu_quick and im2col (#12600)             2025-03-27 08:08:08 -07:00
CMakeLists.txt   opencl: add multi and vision rope, gelu_quick and im2col (#12600)             2025-03-27 08:08:08 -07:00
ggml-opencl.cpp  opencl: use max_alloc_size in backend ctx instead of querying again (#12705)  2025-04-02 17:01:42 -07:00