Commit Graph

339 Commits

slaren
72e7ef4e53 simple : fixes 2023-09-26 23:19:36 +02:00
Georgi Gerganov
8845160058 simple : add README.md 2023-09-21 20:10:14 +02:00
Georgi Gerganov
b2debf65f2 parallel : add disabled experimental batch chunking in powers of two 2023-09-20 20:14:05 +03:00
Georgi Gerganov
ded9b43cad parallel : fix cases where the input prompts can overflow the batch 2023-09-20 19:09:25 +03:00
Georgi Gerganov
ee1d670cc6 parallel : fix bug (extra BOS) + smaller token_prev array 2023-09-20 17:32:21 +03:00
Georgi Gerganov
2f3a46fccf train : make KQ_pos memory buffer permanent via dummy scale op 2023-09-20 14:14:50 +03:00
Georgi Gerganov
db0fc2da06 simple : improve comments + free batch 2023-09-20 13:54:20 +03:00
Georgi Gerganov
b377bf2266 simple : add parallel decoding support 2023-09-20 13:06:34 +03:00
Georgi Gerganov
addae65fd4 llama : improve llama_batch API + simplify parallel example 2023-09-20 11:03:18 +03:00
Georgi Gerganov
a1327c71c6 parallel : rename hot-plug to continuous-batching 2023-09-20 09:24:41 +03:00
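"Continuous batching" (the renamed "hot-plug") means new sequences are admitted into the running batch between decode steps instead of waiting for the current batch to drain. A toy, self-contained model of just the scheduling idea (not llama.cpp code; names and numbers are illustrative):

```cpp
// Toy continuous-batching scheduler: requests join the running batch between
// steps, and finished sequences free their slot immediately.
#include <algorithm>
#include <cstdio>
#include <deque>
#include <vector>

struct Request { int id; int n_remaining; };   // tokens left to generate

int main() {
    std::deque<Request>  pending = {{1, 3}, {2, 5}, {3, 2}};
    std::vector<Request> active;
    const size_t n_parallel = 2;               // max concurrent sequences

    for (int step = 0; !pending.empty() || !active.empty(); ++step) {
        // admit waiting requests into free slots ("hot-plug")
        while (!pending.empty() && active.size() < n_parallel) {
            active.push_back(pending.front());
            pending.pop_front();
        }
        // one decode step: each active sequence contributes one token
        for (auto & r : active) {
            std::printf("step %2d: seq %d emits a token\n", step, r.id);
            --r.n_remaining;
        }
        // retire finished sequences, releasing their slots at once
        active.erase(std::remove_if(active.begin(), active.end(),
                     [](const Request & r) { return r.n_remaining == 0; }),
                     active.end());
    }
    return 0;
}
```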
Georgi Gerganov
7b7472ee26 parallel : minor 2023-09-20 00:35:10 +03:00
Georgi Gerganov
6028879f56 parallel : print misses on each request 2023-09-19 23:50:05 +03:00
Georgi Gerganov
eed3fd4234 parallel : count cache misses 2023-09-19 23:47:47 +03:00
Georgi Gerganov
8a9aca37c1 parallel : remove question with short answers 2023-09-19 23:34:30 +03:00
Georgi Gerganov
4b5f3cd6bf parallel : process system prompt once + configurable parameters + llama API 2023-09-19 17:00:42 +03:00
Georgi Gerganov
82e20e9ba0 parallel : remove new line from prompt 2023-09-19 13:54:41 +03:00
Georgi Gerganov
16090a5dde parallel : fix sequence termination criteria 2023-09-19 13:29:29 +03:00
Georgi Gerganov
806d397c1a parallel : try smaller batches when the KV cache is fragmented 2023-09-19 13:21:36 +03:00
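Context for this commit: llama_decode() signals with a positive return value that it could not find room for the batch in the KV cache, and the parallel example responds by shrinking the batch and retrying. A hedged sketch of that loop, assuming this return-value convention from the period:

```cpp
// Fragmentation fallback: if the whole batch does not fit into the KV cache,
// halve n_batch and try again. Assumes llama_decode() returns 0 on success,
// >0 when no KV slot was found, and <0 on hard errors.
int32_t n_batch = 32;    // tokens submitted per llama_decode() call

while (true) {
    // ... pack up to n_batch pending tokens into `batch` ...
    const int ret = llama_decode(ctx, batch);
    if (ret == 0) {
        break;           // decoded successfully
    }
    if (ret < 0 || n_batch == 1) {
        break;           // hard error, or nothing smaller left to try
    }
    n_batch /= 2;        // cache is fragmented: retry with smaller batches
}
```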
Georgi Gerganov
36714e16d0 parallel : various improvements 2023-09-19 12:29:37 +03:00
Georgi Gerganov
467e307931 simple : fix token counting 2023-09-19 11:45:33 +03:00
Georgi Gerganov
daf4c6d360 llama : fix worst case graph build 2023-09-19 11:05:08 +03:00
Georgi Gerganov
fa0e677820 llama : extend batch API to select which logits to output 2023-09-19 00:24:13 +03:00
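With this change, only batch entries whose logits flag is set get their logits computed and stored, which saves work on prompt tokens. A hedged fragment; llama_get_logits_ith() is assumed here as the accessor introduced around this work:

```cpp
// After a successful llama_decode(ctx, batch) in which only the last entry
// had batch.logits[i] set: read that entry's logits by its batch index.
const float * logits = llama_get_logits_ith(ctx, batch.n_tokens - 1);
// `logits` points at the vocabulary-sized scores used to sample the next token.
```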
Georgi Gerganov
897caccdf4 fixes : speculative KV cache + llama worst-case graph 2023-09-18 22:32:28 +03:00
Georgi Gerganov
466b513851 parallel : disable hot-plug to avoid cache fragmentation 2023-09-18 21:34:20 +03:00
Georgi Gerganov
0161372b9a parallel : example for serving multiple users in parallel 2023-09-18 20:37:28 +03:00
Georgi Gerganov
1f17ea631c speculative : fix KV cache management 2023-09-18 19:01:20 +03:00
Georgi Gerganov
0cbf3bfef8 llama : add llama_kv_cache_shift_seq + no more context swaps 2023-09-18 18:10:43 +03:00
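Instead of the old "context swap" (re-evaluating half of the context when it fills up), the cache cells of a sequence can now be edited in place: remove the oldest tokens and shift the remainder back. A hedged sketch using the function names from this period (they were later renamed, and the argument order here is an assumption):

```cpp
// Drop the oldest n_discard tokens of sequence 0 (keeping the first n_keep)
// and shift the remaining cells back, instead of re-evaluating anything.
const llama_pos n_keep    = 32;
const llama_pos n_discard = 256;

llama_kv_cache_rm_seq   (ctx, 0, n_keep, n_keep + n_discard);
llama_kv_cache_shift_seq(ctx, 0, n_keep + n_discard, -1 /* to end */, -n_discard);
// subsequent decodes continue from the shifted positions -- no context swap
```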
Georgi Gerganov
f015b26689 llama : more robust cell_max heuristic + wip shift 2023-09-18 17:15:58 +03:00
Georgi Gerganov
4d76d762ef llama : extend llama_kv_cache API 2023-09-18 15:53:03 +03:00
Georgi Gerganov
9f42e75489 llama : add new llama_decode() API that works with llama_batch 2023-09-18 14:23:52 +03:00
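This commit introduces llama_decode() together with llama_batch, which carries explicit per-token positions, sequence ids, and a per-token flag selecting which logits to compute. A minimal usage sketch; the field layout (single seq_id per token) and helper signatures follow this period of the API and are approximate, since they have evolved since:

```cpp
// Hedged sketch of the llama_decode() path circa this commit.
#include "llama.h"
#include <vector>

void decode_prompt(llama_context * ctx, const std::vector<llama_token> & toks) {
    llama_batch batch = llama_batch_init((int32_t) toks.size(), /*embd*/ 0);

    batch.n_tokens = (int32_t) toks.size();
    for (int32_t i = 0; i < batch.n_tokens; ++i) {
        batch.token [i] = toks[i];
        batch.pos   [i] = i;     // explicit position, no implicit n_past
        batch.seq_id[i] = 0;     // every token belongs to sequence 0 here
        batch.logits[i] = i == batch.n_tokens - 1; // logits only for the last token
    }

    if (llama_decode(ctx, batch) != 0) {
        // non-zero: decode failed (e.g. the batch did not fit into the KV cache)
    }

    llama_batch_free(batch);
}
```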
Georgi Gerganov
58bb5110ca Merge branch 'master' into custom-attention-mask 2023-09-18 11:15:18 +03:00
Georgi Gerganov
d29e76937c llama : unified KV cache + batch inference API 2023-09-18 11:08:15 +03:00
Georgi Gerganov
1fb033fd85 ggml : ggml_rope now takes a vector with positions instead of n_past 2023-09-17 21:17:10 +03:00
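Supplying positions per token means RoPE no longer assumes one contiguous run starting at n_past, which is what allows a batch to mix tokens from different sequences at unrelated offsets. A self-contained illustration of the underlying rotation for an arbitrary position (the math only, not ggml's implementation):

```cpp
#include <cmath>

// Apply RoPE to one head-dim vector x of even length d at position pos:
// pair (x[2i], x[2i+1]) is rotated by theta_i = pos * base^(-2i/d).
void rope(float * x, int d, int pos, float base = 10000.0f) {
    for (int i = 0; i < d; i += 2) {
        const float theta = pos * std::pow(base, -(float) i / d);
        const float c = std::cos(theta), s = std::sin(theta);
        const float x0 = x[i], x1 = x[i + 1];
        x[i]     = x0 * c - x1 * s;
        x[i + 1] = x0 * s + x1 * c;
    }
}
// Because pos is per token, two sequences can sit in one batch at completely
// different positions -- something a single scalar n_past cannot express.
```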
goerch
b08e75baea Fixing the last deviations from sentencepiece indicated by test-tokenizer-1 (#3170)
* Fix for #2721

* Reenable tokenizer test for LLaMa

* Add `console.cpp` dependency

* Fix dependency to `common`

* Fixing wrong fix.

* Make console usage platform specific

Work on compiler warnings.

* Adapting makefile

* Remove trailing whitespace

* Adapting the other parts of the makefile

* Fix typo.

* Fixing the last deviations from sentencepiece indicated by test-tokenizer-1

* Simplify logic

* Add missing change...

* Fix ugly compiler warning

* llama_tokenize should accept strings containing NUL now

* Adding huichen's test case
2023-09-16 13:41:33 +02:00
Cebtenzzre
e6616cf0db examples : add compiler version and target to build info (#2998) 2023-09-15 16:59:49 -04:00
Cebtenzzre
3aefaab9e5 check C++ code with -Wmissing-declarations (#3184) 2023-09-15 15:38:27 -04:00
Georgi Gerganov
8c00b7a6ff sync : ggml (Metal F32 support + reduce ggml-alloc size) (#3192)
* sync : ggml (Metal F32 support + reduce ggml-alloc size)

ggml-ci

* llama-bench : fix ggml_cpu_has_metal() duplicate function

ggml-ci
2023-09-15 19:06:03 +03:00
Roland
2d770505a8 llama : remove mtest (#3177)
* Remove mtest

* remove from common/common.h and examples/main/main.cpp
2023-09-15 10:28:45 +03:00
bandoti
990a5e226a cmake : add relocatable Llama package (#2960)
* Keep static libs and headers with install

* Add logic to generate Config package

* Use proper build info

* Add llama as import library

* Prefix target with package name

* Add example project using CMake package

* Update README

* Update README

* Remove trailing whitespace
2023-09-14 20:04:40 +03:00
Leng Yue
35f73049af speculative : add heuristic algorithm (#3006)
* Add heuristic algo for speculative

* Constrain minimum n_draft to 2

* speculative : improve heuristic impl

* speculative : be more rewarding upon guessing max drafted tokens

* speculative : fix typos

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-09-14 19:14:44 +03:00
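The heuristic grows the draft when the target model accepts every drafted token and shrinks it on early rejection, with a floor of 2. A sketch of such a control rule; the increments are illustrative, not the PR's exact values:

```cpp
#include <algorithm>

// Adaptive draft length for speculative decoding: reward full acceptance,
// back off on early mismatch, never drop below 2 (per this PR).
int n_draft = 8;

void update_draft_size(int n_accepted) {
    if (n_accepted >= n_draft) {
        n_draft += 2;                    // all drafted tokens accepted: draft more
    } else {
        n_draft -= 1;                    // rejected partway: be more conservative
    }
    n_draft = std::max(n_draft, 2);      // constrain minimum n_draft to 2
}
```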
FK
84e723653c speculative: add --n-gpu-layers-draft option (#3063) 2023-09-13 08:50:46 +02:00
Cebtenzzre
e64f5b5578 examples : make n_ctx warning work again (#3066)
This was broken by commit e36ecdcc ("build : on Mac OS enable Metal by
default (#2901)").
2023-09-08 11:43:35 -04:00
Przemysław Pawełczyk
cb6c44c5e0 build : do not use _GNU_SOURCE gratuitously (#2035)
* Do not use _GNU_SOURCE gratuitously.

What is needed to build llama.cpp and the examples is the availability
of features defined in The Open Group Base Specifications Issue 6
(https://pubs.opengroup.org/onlinepubs/009695399/), also known as the
Single Unix Specification v3 (SUSv3) or POSIX.1-2001 + XSI extensions,
plus some functionality from BSD that is not specified in POSIX.1.

Well, that was true until NUMA support was added recently,
so enable GNU libc extensions for Linux builds to cover that.

Not having feature test macros in the source code gives greater flexibility
to those wanting to reuse it in a 3rd-party app, as they can build it with
the FTMs set by the Makefile here, or with other FTMs depending on their needs.

It builds without issues in Alpine (musl libc), Ubuntu (glibc), MSYS2.

* make : enable Darwin extensions for macOS to expose RLIMIT_MEMLOCK

* make : enable BSD extensions for DragonFlyBSD to expose RLIMIT_MEMLOCK

* make : use BSD-specific FTMs to enable alloca on BSDs

* make : fix OpenBSD build by exposing newer POSIX definitions

* cmake : follow recent FTM improvements from Makefile
2023-09-08 15:09:21 +03:00
Cebtenzzre
00d62adb79 fix some warnings from gcc and clang-tidy (#3038)
Co-authored-by: xaedes <xaedes@gmail.com>
2023-09-07 13:22:29 -04:00
slaren
15b67a66c2 llama-bench : use two tokens in the warmup run for prompt evals (#3059) 2023-09-07 15:52:34 +02:00
Cebtenzzre
de2fe892af examples : replace fprintf to stdout with printf (#3017) 2023-09-05 15:10:27 -04:00
Georgi Gerganov
921772104b speculative : add grammar support (#2991)
* speculative : add grammar support

* grammars : add json_arr.gbnf

* grammar : add comments to new grammar file

* grammar : remove one nested level

* common : warm-up with 2 tokens - seems to work better

* speculative : print draft token pieces

* speculative : reuse grammar parser + better logs and comments

* speculative : avoid grammar_mem

* make : fix speculative build
2023-09-05 08:46:17 +03:00
Georgi Gerganov
e36ecdccc8 build : on Mac OS enable Metal by default (#2901)
* build : on Mac OS enable Metal by default

* make : try to fix build on Linux

* make : move targets back to the top

* make : fix target clean

* llama : enable GPU inference by default with Metal

* llama : fix vocab_only logic when GPU is enabled

* common : better `n_gpu_layers` assignment

* readme : update Metal instructions

* make : fix merge conflict remnants

* gitignore : metal
2023-09-04 22:26:24 +03:00
Cebtenzzre
3103568144 llama-bench : make cpp file non-executable (#2999) 2023-09-04 13:40:18 +03:00
Aarni Koskela
e4386f417f server : add a subtle loading animation to the edit box (#2466)
* editorconfig: add override for the server HTML (which already is 2-space indented)

* server: add a subtle loading animation to the edit box
2023-09-04 16:28:55 +08:00