Commit Graph

1177 Commits

DannyDaemonic
3498588e0f Add --simple-io option for subprocesses and break out console.h and cpp (#1558) 2023-08-04 08:20:12 -07:00
Stephen Nichols
5f631c2679 Fixing race condition in server and partial stream handling in frontend. (#2391)
* Fixing race condition in server.cpp and partial stream handling in completion.js

* Reverting assert edits.

* Adding newline to eof
2023-08-04 13:37:24 +02:00
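
The general shape of such a fix is to serialize access to the completion state shared between the generation loop and the HTTP handlers. A minimal sketch of that pattern, with hypothetical names (not the actual server.cpp code):

```cpp
#include <mutex>
#include <string>

// Hypothetical state shared between the generation thread and HTTP handlers.
struct completion_state {
    std::mutex mtx;
    std::string generated;   // text produced so far
    bool done = false;
};

// The generation thread appends tokens under the lock.
void push_token(completion_state & st, const std::string & piece) {
    std::lock_guard<std::mutex> lock(st.mtx);
    st.generated += piece;
}

// An HTTP handler snapshots the state under the same lock,
// so a partial read never observes a half-written update.
std::string snapshot(completion_state & st) {
    std::lock_guard<std::mutex> lock(st.mtx);
    return st.generated;
}
```
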
l3utterfly
415e99fec2 Stream save llama context data to file instead of allocating entire buffer upfront (#2488)
* added stream saving context data to file to avoid allocating unnecessary amounts of memory

* generalised copying state data to file or buffer

* added comments explaining how copy_state_data works

* fixed trailing whitespaces

* fixed save load state example

* updated save load state to use public function in llama.cpp

* restored the llama_copy_state_data API (undid the breakage)
- moved new logic for copying llama state data to internal function

* fixed function declaration order

* restored save load state example

* fixed whitespace

* removed unused llama-util.h include

* Apply suggestions from code review

Co-authored-by: slaren <slarengh@gmail.com>

* Apply code review suggestions

Co-authored-by: slaren <slarengh@gmail.com>

---------

Co-authored-by: slaren <slarengh@gmail.com>
2023-08-04 13:29:52 +02:00
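
Instead of sizing and filling one buffer for the entire state, the state is written out piece by piece; per the commit bullets, the internal state-copying logic was generalised to target either a buffer or a file. A sketch of the idea with a hypothetical chunked file sink (error handling omitted):

```cpp
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical sink that writes state fields straight to a FILE*,
// so no buffer covering the entire state is ever allocated.
struct file_sink {
    FILE * f;
    void write(const void * data, size_t n) { std::fwrite(data, 1, n, f); }
};

// Sketch: stream a large KV-cache-like blob in fixed-size chunks.
void stream_state(file_sink & sink, const std::vector<uint8_t> & kv_cache) {
    const size_t chunk = 1u << 20; // 1 MiB at a time
    size_t size = kv_cache.size();
    sink.write(&size, sizeof(size));               // header: blob size
    for (size_t off = 0; off < size; off += chunk) {
        size_t n = std::min(chunk, size - off);
        sink.write(kv_cache.data() + off, n);      // body: one chunk
    }
}
```
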
Borislav Stanimirov
ff966e7ca6 build : fix several cast and printf warnings (#2499) 2023-08-04 13:07:21 +03:00
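
Warnings like these are typically silenced with matching printf format specifiers and explicit casts; an illustrative (not actual) example:

```cpp
#include <cinttypes>
#include <cstdio>

int main() {
    std::size_t n = 1024;   // wants %zu, not %d
    int64_t hits = 42;      // wants PRId64, not %ld
    float f = 0.5f;
    // The cast to double documents the variadic promotion explicitly.
    std::printf("n=%zu hits=%" PRId64 " f=%f\n", n, hits, (double) f);
}
```
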
klosax
db5618ad99 cmpnct_gpt2bpe.hpp : comments 2023-08-04 04:57:51 +02:00
klosax
278ada9572 gguf.py : bytearray for gpt2bpe tokenizer 2023-08-04 04:07:57 +02:00
klosax
fb0b243705 Makefile : remove gptneox-common 2023-08-04 04:02:10 +02:00
klosax
5d98989cf6 gpt2 bpe tokenizer (handles merges and unicode) 2023-08-04 03:58:44 +02:00
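
GPT-2-style BPE repeatedly merges the adjacent symbol pair with the lowest merge rank until none remains. A simplified sketch of that core loop (the real cmpnct_gpt2bpe.hpp additionally handles the byte-to-unicode mapping and merge table parsing):

```cpp
#include <map>
#include <string>
#include <utility>
#include <vector>

using merge_ranks = std::map<std::pair<std::string, std::string>, int>;

// Core BPE loop: greedily apply the lowest-ranked merge until none applies.
std::vector<std::string> bpe(std::vector<std::string> sym, const merge_ranks & ranks) {
    while (sym.size() > 1) {
        int best_rank = -1;
        size_t best_i = 0;
        for (size_t i = 0; i + 1 < sym.size(); ++i) {
            auto it = ranks.find({sym[i], sym[i + 1]});
            if (it != ranks.end() && (best_rank < 0 || it->second < best_rank)) {
                best_rank = it->second;
                best_i = i;
            }
        }
        if (best_rank < 0) break;                    // no mergeable pair left
        sym[best_i] += sym[best_i + 1];              // merge the pair in place
        sym.erase(sym.begin() + best_i + 1);
    }
    return sym;
}
```
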
klosax
e6f19ba240 gptneox-main.cpp : gpt2 bpe tokenizer 2023-08-04 03:56:37 +02:00
klosax
2922280a1a convert-gptneox-h5-to-gguf.py : gpt2bpe tokenizer 2023-08-04 03:55:23 +02:00
klosax
6691aa8797 Delete gptneox-common.h 2023-08-04 03:52:01 +02:00
klosax
23abbe8e00 Delete gptneox-common.cpp 2023-08-04 03:51:43 +02:00
Evan Jones
8183159cf3 examples : generate JSON according to schema (#1887)
* examples : add JSON schema grammars

* complete JSON grammar

* ensure primitive types can be used as root of schema

* support integer type and adjust usage text
2023-08-02 22:05:44 -04:00
Johannes Gäßler
468ea24fb4 CUDA: faster non k-quant mul_mat_q kernels (#2483) 2023-08-02 18:04:04 +02:00
Johannes Gäßler
4f6b60c776 CUDA: Fix models with output size != 32000 (#2480) 2023-08-02 16:48:10 +02:00
klosax
c5ba5efda2 convert-llama-h5-to-gguf.py : special tokens 2023-08-02 11:26:07 +02:00
klosax
e1e9b28547 convert-llama-h5-to-gguf.py : accumulate kv / ti + special tokens 2023-08-02 11:15:33 +02:00
ldwang
220d931864 readme : add Aquila-7B model series to supported models (#2487)
* support bpe tokenizer in convert

Signed-off-by: ldwang <ftgreat@gmail.com>

* support bpe tokenizer in convert

Signed-off-by: ldwang <ftgreat@gmail.com>

* support bpe tokenizer in convert, fix

Signed-off-by: ldwang <ftgreat@gmail.com>

* Add Aquila-7B models in README.md

Signed-off-by: ldwang <ftgreat@gmail.com>

* Update Aquila-7B models in README.md

Signed-off-by: ldwang <ftgreat@gmail.com>

---------

Signed-off-by: ldwang <ftgreat@gmail.com>
Co-authored-by: ldwang <ftgreat@gmail.com>
2023-08-02 11:21:11 +03:00
M. Yusuf Sarıgöz
c3a65c4bbe gguf-util.h : update note 2023-08-02 11:16:23 +03:00
M. Yusuf Sarıgöz
cf365fbc20 gguf : gguf counterpart of llama-util.h 2023-08-02 11:13:56 +03:00
Eve
81844fbcfd tests : Fix compilation warnings (Linux/GCC) (#2451)
* fix hellaswag print format, cast away warning in test-double-float

* C++11 cannot use designated initializers

* add static to test-grad0.c internal functions

* use memcpy in test-double-float.c

* port c tests to c++

* use initializer list for ggml_init_params
2023-08-02 11:06:19 +03:00
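
The designated-initializer bullet refers to C99's .field = value syntax, which C++ only gained in C++20; porting the C tests to C++11 meant rewriting such initializers positionally. For example, with a hypothetical struct shaped like ggml_init_params:

```cpp
#include <cstddef>

// Hypothetical struct with the same flavour as ggml_init_params.
struct init_params {
    size_t mem_size;
    void * mem_buffer;
    bool   no_alloc;
};

int main() {
    // OK in C99 (and C++20), but rejected by C++11:
    //   init_params p = { .mem_size = 1024, .mem_buffer = nullptr, .no_alloc = false };
    // C++11-friendly equivalent, relying on member order:
    init_params p = { 1024, nullptr, false };
    return p.no_alloc ? 1 : 0;
}
```
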
Yiming Cui
a312193e18 readme : Add Chinese LLaMA-2 / Alpaca-2 to supported models (#2475)
* add support for chinese llama-2 / alpaca-2

* remove white spaces
2023-08-02 09:18:31 +03:00
klosax
1b4f9c8eb9 convert-gptneox-h5-to-gguf.py : accumulate kv and ti + special tokens 2023-08-01 23:40:50 +02:00
klosax
49380a23a3 gguf.py : accumulate kv and tensor info data + special tokens 2023-08-01 23:37:48 +02:00
klosax
ff1cb02397 constants.py : special tokens 2023-08-01 23:17:21 +02:00
Bono Lv
c574bddb36 fix a typo in examples/server/README.md (#2478) 2023-08-01 14:54:28 +02:00
klosax
36a36c32a3 Update gptneox-main.cpp 2023-08-01 14:44:28 +02:00
klosax
c77fabb1f9 gptneox-main.cpp : special tokens 2023-08-01 14:32:53 +02:00
klosax
e7a741695c convert-gptneox-h5-to-gguf.py : Special tokens 2023-08-01 14:30:00 +02:00
ebraminio
86aeb27734 server : Support dark mode (#2414)
* server : Support dark mode

So it respects the user's system light/dark setting.

* Update index.html.hpp by running ./deps.sh
2023-08-01 10:56:23 +02:00
Matteo Boschini
1873ff586b metal : add gqa8 kernel to allow llama-2-70B on metal (#2459)
* Added gqa8 kernel to allow llama-2-70B on metal

* Update ggml-metal.m

Co-authored-by: Cebtenzzre <cebtenzzre@gmail.com>

* Extend kernel_mul_mat_f16_f32 to handle gqa broadcast

* Added ne03==ne13 assertion

---------

Co-authored-by: Cebtenzzre <cebtenzzre@gmail.com>
2023-08-01 10:43:12 +03:00
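
gqa8 means grouped-query attention with 8 query heads per KV head, as in LLaMA-2-70B (64 query heads, 8 KV heads); the broadcast the kernel needs reduces to an index mapping, conceptually:

```cpp
#include <cassert>

// Conceptual GQA head mapping: several query heads share one KV head.
// For LLaMA-2-70B: n_head = 64, n_head_kv = 8, so gqa = 8.
int kv_head_for(int q_head, int n_head, int n_head_kv) {
    assert(n_head % n_head_kv == 0);
    const int gqa = n_head / n_head_kv;   // query heads per KV head
    return q_head / gqa;
}

int main() {
    // Query heads 0..7 read KV head 0, heads 8..15 read KV head 1, ...
    return kv_head_for(63, 64, 8) == 7 ? 0 : 1;
}
```
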
klosax
da4900e835 Update convert-llama-h5-to-gguf.py 2023-07-31 23:04:03 +02:00
M. Yusuf Sarıgöz
f3de876a12 fix : update convert-llama-h5-to-gguf.py 2023-07-31 23:58:29 +03:00
Johannes Gäßler
49e7cb5bb1 CUDA: fixed LLAMA_FAST compilation option (#2473) 2023-07-31 21:02:19 +02:00
Johannes Gäßler
b772bba42e CUDA: fixed cmake F16 option (#2471) 2023-07-31 19:52:22 +02:00
M. Yusuf Sarıgöz
bb42aefaeb gguf : mmap tensor data example 2023-07-31 17:46:12 +03:00
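
Memory-mapping lets tensor data be consumed directly from the OS page cache rather than copied into an allocated buffer. A POSIX sketch (hypothetical file name and offset; the real example takes them from the GGUF header):

```cpp
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int fd = open("model.gguf", O_RDONLY);         // hypothetical file name
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    fstat(fd, &st);

    // Map the whole file read-only; pages are faulted in on demand.
    void * base = mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (base == MAP_FAILED) { perror("mmap"); return 1; }

    // A tensor's data is then just a pointer into the mapping.
    size_t tensor_offset = 4096;                   // hypothetical, from the header
    const float * data = (const float *)((const char *) base + tensor_offset);
    std::printf("first value: %f\n", (double) data[0]);

    munmap(base, st.st_size);
    close(fd);
}
```
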
Johannes Gäßler
0728c5a8b9 CUDA: mmq CLI option, fixed mmq build issues (#2453) 2023-07-31 15:44:35 +02:00
M. Yusuf Sarıgöz
b26f5b2e43 gguf : fix typo in function call 2023-07-31 16:23:54 +03:00
Johannes Gäßler
1215ed7d5c CUDA: Implemented row flattening for non-glm RoPE (#2468) 2023-07-31 14:32:30 +02:00
Johannes Gäßler
2dbf518911 CUDA: fewer memory bank conflicts for mul_mat_q (#2458) 2023-07-31 13:18:51 +02:00
slaren
9d2382b3e4 Fix Metal backend broken by the allocator changes (#2455)
2023-07-31 11:02:53 +02:00
M. Yusuf Sarıgöz
7aa0a0e7f7 gguf : support custom alignment value 2023-07-31 09:59:36 +03:00
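
Custom alignment means the offset of each tensor's data is padded up to a multiple of an alignment value read from the file rather than a hard-coded one. Assuming a power-of-two alignment, the padding arithmetic is:

```cpp
#include <cstddef>
#include <cstdio>

// Round offset up to the next multiple of align (align must be a power of two).
size_t align_offset(size_t offset, size_t align) {
    return (offset + align - 1) & ~(align - 1);
}

int main() {
    // e.g. with a 32-byte alignment, a tensor ending at offset 1000
    // means the next tensor's data starts at 1024.
    std::printf("%zu\n", align_offset(1000, 32));
}
```
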
klosax
6b3a7b9f4f Update convert-llama-h5-to-gguf.py 2023-07-31 03:02:00 +02:00
klosax
4f5b6224be Update convert-gptneox-h5-to-gguf.py 2023-07-31 03:00:20 +02:00
klosax
2a0914673c Update convert-gptneox-h5-to-gguf.py 2023-07-30 17:31:11 +02:00
klosax
068a8e0fbe Update convert-llama-h5-to-gguf.py 2023-07-30 17:29:56 +02:00
klosax
30c4ea47e6 add gptneox gguf example 2023-07-30 16:59:26 +02:00
klosax
2fabc176ce Update convert-llama-h5-to-gguf.py 2023-07-30 16:28:08 +02:00
slaren
a113689571 ggml : add graph tensor allocator (#2411)
* ggml : add graph tensor allocator

* ggml : don't calculate data pointer of unallocated tensors when creating a view with an offset

* ggml : refactor ggml_view_Nd into ggml_view_tensor_offset
2023-07-30 15:58:01 +02:00
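
A graph tensor allocator assigns every intermediate tensor an offset in one shared arena and recycles the space once the tensor's last consumer has run, so peak memory follows the graph's liveness rather than the sum of all tensor sizes. A toy first-fit sketch of the idea (conceptual, far simpler than the real allocator; coalescing of adjacent free blocks is omitted):

```cpp
#include <cstddef>
#include <map>

// Toy allocator over one arena: maps offset -> size of each free block.
struct toy_alloc {
    std::map<size_t, size_t> free_blocks;          // first-fit free list
    explicit toy_alloc(size_t arena) { free_blocks[0] = arena; }

    size_t alloc(size_t size) {
        for (auto it = free_blocks.begin(); it != free_blocks.end(); ++it) {
            if (it->second >= size) {
                size_t off = it->first, rest = it->second - size;
                free_blocks.erase(it);
                if (rest) free_blocks[off + size] = rest; // keep the tail free
                return off;
            }
        }
        return (size_t) -1;                        // out of arena space
    }

    // Called when a tensor's last consumer has executed.
    void release(size_t off, size_t size) { free_blocks[off] = size; }
};
```
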
klosax
f175b05872 Makefile : add gptneox gguf example 2023-07-30 15:08:37 +02:00