4e54be0ec6  llama/ex: remove --logdir argument (#10339)
    Johannes Gäßler, 2024-11-16 23:00:41 +01:00

ff7fb670d0  server : add missing docs (#10269)
    Alexey Parfenov, 2024-11-13 13:16:30 +02:00

b141e5f6ef  server : enable KV cache defrag by default (#10233)
    Georgi Gerganov, 2024-11-11 08:38:43 +02:00

a71d81cf8c  server : revamp chat UI with vuejs and daisyui (#10175)
    Xuan Son Nguyen, 2024-11-07 17:31:10 -04:00
    * server : simple chat UI with vuejs and daisyui
    * move old files to legacy folder
    * embed deps into binary
    * basic markdown support
    * add conversation history, save to localStorage
    * fix bg-base classes
    * save theme preferences
    * fix tests
    * regenerate, edit, copy buttons
    * small fixes
    * docs: how to use legacy ui
    * better error handling
    * make CORS preflight more explicit
    * add GET method for CORS
    * fix tests
    * clean up a bit
    * better auto scroll
    * small fixes
    * use collapse-arrow
    * fix closeAndSaveConfigDialog
    * small fix
    * remove console.log
    * fix style for <pre> element
    * lighter bubble color (less distracting when reading)

9e0ecfb697  server : clarify /slots endpoint, add is_processing (#10162)
    Xuan Son Nguyen, 2024-11-04 16:33:29 +01:00
    * server : clarify /slots endpoint, add is_processing
    * fix tests

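Note: a minimal client sketch for the clarified endpoint. Assumptions not part of the commit itself: a llama-server on localhost:8080, cpp-httplib and nlohmann/json for the client, and the per-slot fields `id` and `is_processing` as documented by this change.

```cpp
// Poll /slots and report which slots are busy.
#include <httplib.h>
#include <nlohmann/json.hpp>
#include <cstdio>

int main() {
    httplib::Client cli("localhost", 8080);
    auto res = cli.Get("/slots");
    if (!res || res->status != 200) {
        fprintf(stderr, "request failed\n");
        return 1;
    }
    // The endpoint returns a JSON array with one object per slot.
    for (const auto & slot : nlohmann::json::parse(res->body)) {
        printf("slot %d: %s\n",
               slot["id"].get<int>(),
               slot["is_processing"].get<bool>() ? "processing" : "idle");
    }
    return 0;
}
```
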
8d8ff71536  llama : remove Tail-Free sampling (#10071)
    Georgi Gerganov, 2024-10-29 10:42:05 +02:00

ff252ea48e  llama : add DRY sampler (#9702)
    wwoodsTM, 2024-10-25 19:07:34 +03:00
    * sampling : add DRY sampler (post-refactor)
    * DRY: Trying to fix coauthors, removed unneeded line
    * DRY: Fixed redundant code
    * DRY: Fixed crash issue due to DRY being in chain but uninitialized
    Co-authored-by: l3utterfly <gc.pthzfoldr@gmail.com>
    Co-authored-by: pi6am <34464159+pi6am@users.noreply.github.com>

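Note: a hedged sketch of enabling DRY from a server client. The parameter names (`dry_multiplier`, `dry_base`, `dry_allowed_length`, `dry_penalty_last_n`) follow this PR's server docs; verify them against the README of your build before relying on them.

```cpp
// Request a completion with the DRY (Don't Repeat Yourself) penalty enabled.
#include <httplib.h>
#include <nlohmann/json.hpp>
#include <cstdio>

int main() {
    httplib::Client cli("localhost", 8080);
    nlohmann::json req = {
        {"prompt", "Once upon a time"},
        {"n_predict", 64},
        {"dry_multiplier", 0.8},    // 0.0 disables DRY
        {"dry_base", 1.75},
        {"dry_allowed_length", 2},  // repeats shorter than this are not penalized
        {"dry_penalty_last_n", -1}, // -1 = scan the whole context
    };
    auto res = cli.Post("/completion", req.dump(), "application/json");
    if (res && res->status == 200) {
        printf("%s\n", nlohmann::json::parse(res->body)["content"].get<std::string>().c_str());
    }
    return 0;
}
```
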
958367bf53  server : refactor slot input data, move tokenizer to HTTP thread (#10023)
    Xuan Son Nguyen, 2024-10-24 21:51:22 +02:00
    * server : refactor slot input data, move tokenizer to HTTP thread
    * move prompt_tokens.empty() check
    * fix incorrect if branch
    * fix infinite generation loop
    * bring back infill validation
    * add infill test
    * try fixing format_infill
    * fix test
    * remove redundant code
    * rename completion to inference
    * update docs
    * use llama_tokens everywhere

8901755ba3  server : add n_indent parameter for line indentation requirement (#9929)
    Georgi Gerganov, 2024-10-18 07:32:19 +03:00

223c25a72f  server : improve infill context reuse (#9894)
    Georgi Gerganov, 2024-10-15 16:28:55 +03:00

d4c19c0f5c  server : accept extra_context for the infill endpoint (#9874)
    Georgi Gerganov, 2024-10-13 21:31:35 +03:00
    * server : accept extra_context for the infill endpoint
    * server : update readme [no ci]
    * server : use repo-level FIM pattern if possible

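Note: a sketch of an /infill request carrying repo-level context chunks alongside the prefix/suffix. The `extra_context` entry fields (`filename`, `text`) are an assumption based on this PR's description; later server versions expose the same idea under `input_extra`.

```cpp
// Fill-in-the-middle request with extra repo-level context.
#include <httplib.h>
#include <nlohmann/json.hpp>
#include <cstdio>

int main() {
    httplib::Client cli("localhost", 8080);
    nlohmann::json req = {
        {"input_prefix", "int sum(const std::vector<int> & v) {\n    "},
        {"input_suffix", "\n}\n"},
        {"extra_context", nlohmann::json::array({
            {{"filename", "util.h"}, {"text", "int add(int a, int b);"}},
        })},
    };
    auto res = cli.Post("/infill", req.dump(), "application/json");
    if (res && res->status == 200) {
        printf("%s\n", nlohmann::json::parse(res->body)["content"].get<std::string>().c_str());
    }
    return 0;
}
```
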
c7181bd294  server : reuse cached context chunks (#9866)
    Georgi Gerganov, 2024-10-13 18:52:48 +03:00

edc265661c  server : add option to time limit the generation phase (#9865)
    Georgi Gerganov, 2024-10-12 16:14:27 +03:00

1bde94dd02  server : remove self-extend features (#9860)
    Georgi Gerganov, 2024-10-12 16:06:31 +03:00
    * server : remove self-extend
    * server : fix context limit check to use slot.n_past

95c76e8e92  server : remove legacy system_prompt feature (#9857)
    Georgi Gerganov, 2024-10-12 14:51:54 +03:00
    * server : remove legacy system_prompt feature
    * readme : update [no ci]
    * server : fix non-transformer logic + remove response from /props

11ac9800af  llama : improve infill support and special token detection (#9798)
    Georgi Gerganov, 2024-10-12 08:21:51 +03:00
    * llama : improve infill support
    * llama : add more FIM token strings
    * server : update prompt on slot restore (#9800)
    * gguf : deprecate old FIM token KVs

458367a906  server : better security control for public deployments (#9776)
    Xuan Son Nguyen, 2024-10-08 13:27:04 +02:00
    * server : more explicit endpoint access settings
    * protect /props endpoint
    * fix tests
    * update server docs
    * fix typo
    * fix tests

133c7b46b3  Fixed RNG seed docs (#9723)
    Daniel Kleine, 2024-10-04 10:54:44 +02:00
    * Update README.md: fixed RNG seed info
    * changed print format to unsigned

f4d2b8846a  llama : add reranking support (#9510)
    Georgi Gerganov, 2024-09-28 17:42:03 +03:00
    * py : add XLMRobertaForSequenceClassification [no ci]
    * py : fix scalar-tensor conversion [no ci]
    * py : fix position embeddings chop [no ci]
    * llama : read new cls tensors [no ci]
    * llama : add classification head (wip) [no ci]
    * llama : add "rank" pooling type
    * server : add rerank endpoint
    * llama : avoid ggml_repeat during classification
    * rerank : cleanup + comments
    * server : accept /rerank endpoint in addition to /v1/rerank [no ci]
    * embedding : parse special tokens
    * jina : support v1 reranker
    * vocab : minor style
    * server : initiate tests for later
    * server : add docs
    * llama : add comment [no ci]
    * llama : fix uninitialized tensors
    * ci : add rerank tests
    * add reranking test
    * change test data
    * Update examples/server/server.cpp
    * add `--reranking` argument
    * update server docs
    * llama : fix comment [no ci]
    Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
    Co-authored-by: Xuan Son Nguyen <thichthat@gmail.com>

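Note: a minimal rerank client sketch. Assumptions: the server was started with a reranker model and the `--reranking` flag added here, and responses follow the Jina-style shape this PR targets (`results` entries carrying `index` and `relevance_score`).

```cpp
// Score documents against a query via the new /v1/rerank endpoint.
#include <httplib.h>
#include <nlohmann/json.hpp>
#include <cstdio>

int main() {
    httplib::Client cli("localhost", 8080);
    nlohmann::json req = {
        {"query", "What is a panda?"},
        {"documents", {"hi", "The giant panda is a bear species endemic to China.", "it's a truck"}},
    };
    auto res = cli.Post("/v1/rerank", req.dump(), "application/json");
    if (res && res->status == 200) {
        for (const auto & r : nlohmann::json::parse(res->body)["results"]) {
            printf("doc %d -> score %f\n", r["index"].get<int>(), r["relevance_score"].get<double>());
        }
    }
    return 0;
}
```
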
afbbfaa537  server : add more env vars, improve gen-docs (#9635)
    Xuan Son Nguyen, 2024-09-25 14:05:13 +02:00
    * server : add more env vars, improve gen-docs
    * update server docs
    * LLAMA_ARG_NO_CONTEXT_SHIFT

0b3bf966f4  server : add --no-context-shift option (#9607)
    Xuan Son Nguyen, 2024-09-23 22:23:54 +02:00
    * server : add --no-context-shift option
    * small fix
    * Update examples/server/tests/features/embeddings.feature
    * tests : minor fix
    * revert usage of GGML_ASSERT
    * update server documentation
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

8a308354f6  server : match OAI structured output response (#9527)
    Vinesh Janarthanan, 2024-09-18 09:50:34 +03:00

8b836ae731  arg : add env variable for parallel (#9513)
    Bert Wagner, 2024-09-17 16:35:38 +03:00
    * add env variable for parallel
    * Update README.md with env: LLAMA_ARG_N_PARALLEL

6262d13e0b  common : reimplement logging (#9418)
    Georgi Gerganov, 2024-09-15 20:46:12 +03:00
    https://github.com/ggerganov/llama.cpp/pull/9418

78203641fe  server : Add option to return token pieces in /tokenize endpoint (#9108)
    Mathijs Henquet, 2024-09-12 22:30:11 +02:00
    * server : added with_pieces functionality to /tokenize endpoint
    * server : Add tokenize with pieces tests to server.feature
    * Handle case if tokenizer splits along utf8 continuation bytes
    * Add example of token splitting
    * Remove trailing ws
    * Fix trailing ws
    * Maybe fix ci
    * maybe this fixes windows ci?
    Co-authored-by: Xuan Son Nguyen <son@huggingface.co>

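Note: a client sketch for `with_pieces`. Per the commit, a piece that splits a multi-byte UTF-8 character comes back as a byte array rather than a string, so the client below checks both cases; host/port and libraries are assumptions, not part of the commit.

```cpp
// Tokenize text and print the raw piece for each token id.
#include <httplib.h>
#include <nlohmann/json.hpp>
#include <cstdio>

int main() {
    httplib::Client cli("localhost", 8080);
    nlohmann::json req = {
        {"content", "Hello world"},
        {"with_pieces", true},
    };
    auto res = cli.Post("/tokenize", req.dump(), "application/json");
    if (res && res->status == 200) {
        for (const auto & tok : nlohmann::json::parse(res->body)["tokens"]) {
            // each entry is {"id": <int>, "piece": <string or byte array>}
            if (tok["piece"].is_string()) {
                printf("%d -> '%s'\n", tok["id"].get<int>(), tok["piece"].get<std::string>().c_str());
            } else {
                printf("%d -> <%zu raw bytes>\n", tok["id"].get<int>(), tok["piece"].size());
            }
        }
    }
    return 0;
}
```
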
bfe76d4a17  common : move arg parser code to arg.cpp (#9388)
    Xuan Son Nguyen, 2024-09-09 23:36:09 +02:00
    * common : move arg parser to arg.cpp
    * better categorize args
    * add cmake
    * missing climits
    * missing cstdarg
    * common : more explicit includes
    * fix build
    * refactor gpt_params_parse
    * update server readme
    * fix test
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

1b9ae5189c  common : refactor arg parser (#9308)
    Xuan Son Nguyen, 2024-09-07 20:43:51 +02:00
    * (wip) argparser v3
    * migrated
    * add test
    * handle env
    * fix linux build
    * add export-docs example
    * fix build (2)
    * skip build test-arg-parser on windows
    * update server docs
    * bring back missing --alias
    * bring back --n-predict
    * clarify test-arg-parser
    * small correction
    * add comments
    * fix args with 2 values
    * refine example-specific args
    * no more lambda capture
    * params.sparams
    * optimize more
    * export-docs --> gen-docs
    Co-authored-by: slaren <slaren@users.noreply.github.com>

df270ef745  llama : refactor sampling v2 (#9294)
    Georgi Gerganov, 2024-09-07 15:16:19 +03:00
    - Add `struct llama_sampler` and `struct llama_sampler_i`
    - Add `llama_sampler_` API
    - Add `llama_sampler_chain_` API for chaining multiple samplers
    - Remove `LLAMA_API_INTERNAL`
    - Add `llama_perf_` API and remove old `llama_print_timings` and `llama_reset_timings`

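Note: a self-contained sketch of the new `llama_sampler_chain_` API against a toy logit distribution, so no model is needed (link against libllama). Names follow the post-#9294 `llama.h`; the constants are illustrative only.

```cpp
// Build a sampler chain (top-k -> temperature -> seeded sampling from the
// distribution) and apply it to hand-written logits for a 4-token "vocabulary".
#include "llama.h"
#include <cstdio>
#include <vector>

int main() {
    // Fake logits: token id 3 is the most likely.
    std::vector<llama_token_data> data;
    for (llama_token id = 0; id < 4; ++id) {
        data.push_back({ id, /*logit =*/ float(id), /*p =*/ 0.0f });
    }
    llama_token_data_array cur_p = { data.data(), data.size(), /*selected =*/ -1, /*sorted =*/ false };

    llama_sampler * chain = llama_sampler_chain_init(llama_sampler_chain_default_params());
    llama_sampler_chain_add(chain, llama_sampler_init_top_k(3));
    llama_sampler_chain_add(chain, llama_sampler_init_temp(0.8f));
    llama_sampler_chain_add(chain, llama_sampler_init_dist(1234)); // seeded; sets cur_p.selected

    llama_sampler_apply(chain, &cur_p);
    printf("sampled token id: %d\n", cur_p.data[cur_p.selected].id);

    llama_sampler_free(chain); // also frees the samplers added to the chain
    return 0;
}
```

In a real decode loop one would instead call `llama_sampler_sample(chain, ctx, -1)`, which applies the chain to the context's last logits and accepts the sampled token into the chain's state.
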
a77feb5d71  server : add some missing env variables (#9116)
    Xuan Son Nguyen, 2024-08-27 11:07:01 +02:00
    * server : add some missing env variables
    * add LLAMA_ARG_HOST to server dockerfile
    * also add LLAMA_ARG_CONT_BATCHING

fc54ef0d1c  server : support reading arguments from environment variables (#9105)
    Xuan Son Nguyen, 2024-08-21 11:04:34 +02:00
    * server : support reading arguments from environment variables
    * add -fa and -dt
    * readme : specify non-arg env var

8b3befc0e2  server : refactor middleware and /health endpoint (#9056)
    Xuan Son Nguyen, 2024-08-16 17:19:05 +02:00
    * server : refactor middleware and /health endpoint
    * move "fail_on_no_slot" to /slots
    * Update examples/server/server.cpp
    * fix server tests
    * fix CI
    * update server docs
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

1e6f6554aa  server : add lora hotswap endpoint (WIP) (#8857)
    Xuan Son Nguyen, 2024-08-06 17:33:39 +02:00
    * server : add lora hotswap endpoint
    * handle lora_no_apply
    * fix build
    * update docs
    * clean up struct def
    * fix build
    * add LoRA test
    * fix style

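Note: a sketch of the hotswap API added here. Assumptions: the server was launched with at least one `--lora` adapter, and the `{id, scale}` request shape follows this PR's docs.

```cpp
// List loaded LoRA adapters, then re-weight one without restarting the server.
#include <httplib.h>
#include <nlohmann/json.hpp>
#include <cstdio>

int main() {
    httplib::Client cli("localhost", 8080);

    // GET returns the adapters the server was started with.
    auto res = cli.Get("/lora-adapters");
    if (res && res->status == 200) {
        printf("loaded adapters: %s\n", res->body.c_str());
    }

    // POST an array of {id, scale} pairs to change adapter strengths at runtime.
    nlohmann::json req = nlohmann::json::array({
        { {"id", 0}, {"scale", 0.5} },
    });
    res = cli.Post("/lora-adapters", req.dump(), "application/json");
    return res && res->status == 200 ? 0 : 1;
}
```
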
afbbcf3c04  server : update llama-server embedding flag documentation (#8779)
    Igor Okulist, 2024-07-31 19:59:09 -04:00
    Fixes #8763

4b0eff3df5  docs : Quantum -> Quantized (#8666)
    Ujjawal Panchal, 2024-07-25 11:13:27 +03:00
    * docfix: imatrix readme, quantum models -> quantized models.
    * docfix: server readme: quantum models -> quantized models.

628154492a  server : update doc to clarify n_keep when there is bos token (#8619)
    Jan Boon, 2024-07-22 11:02:09 +03:00

4db8f60fe7  fix ci (#8494)
    Xuan Son Nguyen, 2024-07-15 19:23:10 +02:00

f17f39ff9c  server: update README.md with llama-server --help output [no ci] (#8472)
    M-A, 2024-07-15 15:04:56 +03:00
    The README.md had stale information. In particular, the --ctx-size
    "defaults to 512" claim confused me and I had to check the code to
    confirm it was false. Since the server is evolving rapidly, it is
    probably better to keep the source of truth in a single place (the
    source code) and generate the README.md from it.
    Did:
        make llama-server
        ./llama-server --help > t.txt
        vimdiff t.txt examples/server/README.md
    I copied the content inside a backquote block. I would have preferred
    proper text, but it would require a fair amount of surgery to make the
    current output compatible with markdown. A follow-up could be to
    automate this process with a script.
    No functional change.

cb4d86c4d7  server: Retrieve prompt template in /props (#8337)
    Bjarke Viksøe, 2024-07-07 11:10:38 +02:00
    * server: Retrieve prompt template in /props
    This PR adds the following:
    - Expose the model's Jinja2 prompt template in the /props endpoint.
    - Change the log level from Error to Warning for the template-mismatch warning.
    The front-end stands a better chance of actually executing the Jinja template correctly; the server is currently just guessing it.
    Ideally this should have been inside a JSON block that exposes the same key/value pairs as listed during startup in the "llm_load_print_meta" function.
    * Make string buffer dynamic
    * Add doc and better string handling
    * Using chat_template naming convention
    * Use intermediate vector for string assignment

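Note: a sketch reading the exposed template. The top-level `chat_template` field name follows the commit's "chat_template naming convention" bullet; treat it as an assumption for other server versions.

```cpp
// Fetch /props and print the model's chat template, which a front-end
// could feed into its own Jinja renderer.
#include <httplib.h>
#include <nlohmann/json.hpp>
#include <cstdio>

int main() {
    httplib::Client cli("localhost", 8080);
    auto res = cli.Get("/props");
    if (!res || res->status != 200) return 1;
    auto props = nlohmann::json::parse(res->body);
    printf("chat_template:\n%s\n", props["chat_template"].get<std::string>().c_str());
    return 0;
}
```
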
5a7447c569  readme : fix minor typos [no ci] (#8314)
    Pieter Ouwerkerk, 2024-07-05 09:58:41 +03:00

38373cfbab  Add SPM infill support (#8016)
    Sigbjørn Skjæret, 2024-06-28 12:53:43 +02:00
    * add --spm-infill option
    * support --spm-infill
    * support --spm-infill

1c641e6aac  build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc. (#7809)
    Olivier Chafik, 2024-06-13 00:41:52 +01:00
    * `main`/`server`: rename to `llama` / `llama-server` for consistency w/ homebrew
    * server: update refs -> llama-server; gitignore llama-server
    * server: simplify nix package
    * main: update refs -> llama; fix examples/main ref
    * main/server: fix targets
    * update more names
    * Update build.yml
    * rm accidentally checked in bins
    * update straggling refs
    * Update .gitignore
    * Update server-llm.sh
    * main: target name -> llama-cli
    * Prefix all example bins w/ llama-
    * fix main refs
    * rename {main->llama}-cmake-pkg binary
    * prefix more cmake targets w/ llama-
    * add/fix gbnf-validator subfolder to cmake
    * sort cmake example subdirs
    * rm bin files
    * fix llama-lookup-* Makefile rules
    * gitignore /llama-*
    * rename Dockerfiles
    * rename llama|main -> llama-cli; consistent RPM bin prefixes
    * fix some missing -cli suffixes
    * rename dockerfile w/ llama-cli
    * rename(make): llama-baby-llama
    * update dockerfile refs
    * more llama-cli(.exe)
    * fix test-eval-callback
    * rename: llama-cli-cmake-pkg(.exe)
    * address gbnf-validator unused fread warning (switched to C++ / ifstream)
    * add two missing llama- prefixes
    * Updating docs for eval-callback binary to use new `llama-` prefix.
    * Updating a few lingering doc references for rename of main to llama-cli
    * Updating `run-with-preset.py` to use new binary names; updating docs around `perplexity` binary rename.
    * Updating documentation references for lookup-merge and export-lora
    * Updating two small `main` references missed earlier in the finetune docs.
    * Update apps.nix
    * update grammar/README.md w/ new llama-* names
    * update llama-rpc-server bin name + doc
    * Revert "update llama-rpc-server bin name + doc"
    This reverts commit e474ef1df4…
    Co-authored-by: HanClinto <hanclinto@gmail.com>

7027b27d76  server: update cache_prompt documentation [no ci] (#7745)
    Johannes Gäßler, 2024-06-07 11:15:49 +02:00

1b01f06db0  server: add test for token probs (#7347)
    Johannes Gäßler, 2024-05-19 16:26:02 +02:00

cb42c29427  server: correct --threads documentation [no ci] (#7362)
    Johannes Gäßler, 2024-05-18 11:10:47 +02:00

9c4fdcbec8  [Server] Added --verbose option to README [no ci] (#7335)
    Leon Knauer, 2024-05-17 10:11:03 +10:00

27f65d6267  docs: Fix typo and update description for --embeddings flag (#7026)
    Ryuei, 2024-05-14 15:20:47 +10:00
    - Change '--embedding' to '--embeddings' in the README
    - Update the description to match the latest --help output
    - Add a caution about defining physical batch size

911b3900dd  server : add_special option for tokenize endpoint (#7059)
    Johan, 2024-05-08 15:27:58 +03:00

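Note: a sketch contrasting the `add_special` flag from this commit on and off; with it enabled the server prepends special tokens such as BOS when the model defines them. Host/port and client libraries are assumptions.

```cpp
// Tokenize the same text with and without special prefix tokens.
#include <httplib.h>
#include <nlohmann/json.hpp>
#include <cstdio>

int main() {
    httplib::Client cli("localhost", 8080);
    for (bool add_special : {false, true}) {
        nlohmann::json req = { {"content", "Hello"}, {"add_special", add_special} };
        auto res = cli.Post("/tokenize", req.dump(), "application/json");
        if (res && res->status == 200) {
            auto tokens = nlohmann::json::parse(res->body)["tokens"];
            printf("add_special=%d -> %zu tokens\n", (int) add_special, tokens.size());
        }
    }
    return 0;
}
```
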
af0a5b6163  server: fix incorrectly reported token probabilities (#7125)
    Johannes Gäßler, 2024-05-07 23:07:58 +02:00
    * server: normalize token probabilities
    * fix temperature == 0.0f

260b7c6529  server : update readme with undocumented options (#7013)
    Kyle Mistele, 2024-05-07 21:44:29 +03:00

b8a7a5a90f  build(cmake): simplify instructions (cmake -B build && cmake --build build ...) (#6964)
    Olivier Chafik, 2024-04-29 17:02:45 +01:00
    * readme: cmake . -B build && cmake --build build
    * build: fix typo
    * build: drop implicit . from cmake config command
    * build: remove another superfluous .
    * build: update MinGW cmake commands
    * Update README-sycl.md
    * build: reinstate --config Release as not the default w/ some generators + document how to build Debug
    * build: revert more --config Release
    * build: nit / remove -H from cmake example
    * build: reword debug instructions around single/multi config split
    Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
    Co-authored-by: Neo Zhang Jianyu <jianyu.zhang@intel.com>