CS348Project/llama.cpp
mirror of https://github.com/ggml-org/llama.cpp.git synced 2025-10-28 08:31:25 +00:00
llama.cpp/tests at commit b1de96824bdbeb91ea458abcb3e5478690ad0727

Latest commit: 5d3de51f97 (Herman Semenov), "ggml, common, examples, tests : fixed type arguments in printf (#5528)", 2024-02-18 18:20:12 +02:00
| File | Last commit | Date |
| --- | --- | --- |
| .gitignore | … | … |
| CMakeLists.txt | … | … |
| get-model.cpp | … | … |
| get-model.h | … | … |
| test-autorelease.cpp | ggml : add numa options (#5377) | 2024-02-16 11:31:07 +02:00 |
| test-backend-ops.cpp | 1.5 bit quantization (#5453) | 2024-02-18 18:16:55 +02:00 |
| test-c.c | … | … |
| test-double-float.cpp | … | … |
| test-grad0.cpp | … | … |
| test-grammar-parser.cpp | ggml, common, examples, tests : fixed type arguments in printf (#5528) | 2024-02-18 18:20:12 +02:00 |
| test-llama-grammar.cpp | ggml, common, examples, tests : fixed type arguments in printf (#5528) | 2024-02-18 18:20:12 +02:00 |
| test-model-load-cancel.cpp | ggml : add numa options (#5377) | 2024-02-16 11:31:07 +02:00 |
| test-opt.cpp | … | … |
| test-quantize-fns.cpp | ggml : add mmla kernels for quantized GEMM (#4966) | 2024-02-11 15:22:33 +02:00 |
| test-quantize-perf.cpp | ggml : add mmla kernels for quantized GEMM (#4966) | 2024-02-11 15:22:33 +02:00 |
| test-rope.cpp | … | … |
| test-sampling.cpp | … | … |
| test-tokenizer-0-falcon.cpp | ggml : add numa options (#5377) | 2024-02-16 11:31:07 +02:00 |
| test-tokenizer-0-falcon.py | … | … |
| test-tokenizer-0-llama.cpp | ggml : add numa options (#5377) | 2024-02-16 11:31:07 +02:00 |
| test-tokenizer-0-llama.py | … | … |
| test-tokenizer-1-bpe.cpp | ggml : add numa options (#5377) | 2024-02-16 11:31:07 +02:00 |
| test-tokenizer-1-llama.cpp | ggml : add numa options (#5377) | 2024-02-16 11:31:07 +02:00 |