CS348Project / llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-11-10 10:27:03 +00:00)
Files at commit 35266573b968e1c947b367782fb4b3eddbb4f3c0 — llama.cpp / ggml / src / ggml-webgpu
Latest commit 35266573b9 by Reese Levine — ggml webgpu: actually add softmax, fix rms_norm offset (#16400), 2025-10-04 20:59:31 -07:00
* implement soft_max
* Fix soft_max data race
* Temporary fix, wait on each submit
wgsl-shaders/      ggml webgpu: actually add softmax, fix rms_norm offset (#16400)   2025-10-04 20:59:31 -07:00
CMakeLists.txt     ggml WebGPU: add support for quantization types (#15440)          2025-08-22 11:28:03 -07:00
ggml-webgpu.cpp    ggml webgpu: actually add softmax, fix rms_norm offset (#16400)   2025-10-04 20:59:31 -07:00