Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-11-14 11:07:10 +00:00.
llama.cpp/scripts @ 4a5b8aff40277071dbb98e81b5d0cbbbd3c37283

Latest commit: sync : ggml (cdabeb2c27) by Georgi Gerganov, 2025-11-05 10:41:51 +02:00
| File | Last commit | Date |
|------|-------------|------|
| apple | … | … |
| jinja | scripts : add Jinja tester PySide6 simple app (#15756) | 2025-09-05 01:05:12 +02:00 |
| snapdragon | Hexagon Op queue & dispatch optimizations (#16820) | 2025-10-29 06:29:12 -07:00 |
| bench-models.sh | scripts : add script to bench models (#16894) | 2025-11-02 00:15:31 +02:00 |
| build-info.sh | … | … |
| check-requirements.sh | … | … |
| compare-commits.sh | scripts: add sqlite3 check for compare-commits.sh (#15633) | 2025-08-28 19:23:22 +08:00 |
| compare-llama-bench.py | scripts: strip "AMD Instinct" from GPU name (#15668) | 2025-08-29 22:04:08 +02:00 |
| create_ops_docs.py | … | … |
| debug-test.sh | … | … |
| fetch_server_test_models.py | … | … |
| gen-authors.sh | … | … |
| gen-unicode-data.py | … | … |
| get_chat_template.py | … | … |
| get-flags.mk | … | … |
| get-hellaswag.sh | … | … |
| get-pg.sh | … | … |
| get-wikitext-2.sh | … | … |
| get-wikitext-103.sh | … | … |
| get-winogrande.sh | … | … |
| hf.sh | … | … |
| install-oneapi.bat | … | … |
| server-bench.py | llama: use FA + max. GPU layers by default (#15434) | 2025-08-30 16:32:10 +02:00 |
| sync_vendor.py | … | … |
| sync-ggml-am.sh | … | … |
| sync-ggml.last | sync : ggml | 2025-11-05 10:41:51 +02:00 |
| sync-ggml.sh | … | … |
| tool_bench.py | server : speed up tests (#15836) | 2025-09-06 14:45:24 +02:00 |
| tool_bench.sh | … | … |
| verify-checksum-models.py | … | … |
| xxd.cmake | … | … |