Olivier Chafik
1c641e6aac
build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809)

						* `main`/`server`: rename to `llama` / `llama-server` for consistency w/ homebrew
* server: update refs -> llama-server
  gitignore llama-server
* server: simplify nix package
* main: update refs -> llama
  fix examples/main ref
* main/server: fix targets
* update more names
* Update build.yml
* rm accidentally checked in bins
* update straggling refs
* Update .gitignore
* Update server-llm.sh
* main: target name -> llama-cli
* Prefix all example bins w/ llama-
* fix main refs
* rename {main->llama}-cmake-pkg binary
* prefix more cmake targets w/ llama-
* add/fix gbnf-validator subfolder to cmake
* sort cmake example subdirs
* rm bin files
* fix llama-lookup-* Makefile rules
* gitignore /llama-*
* rename Dockerfiles
* rename llama|main -> llama-cli; consistent RPM bin prefixes
* fix some missing -cli suffixes
* rename dockerfile w/ llama-cli
* rename(make): llama-baby-llama
* update dockerfile refs
* more llama-cli(.exe)
* fix test-eval-callback
* rename: llama-cli-cmake-pkg(.exe)
* address gbnf-validator unused fread warning (switched to C++ / ifstream; see the sketch after this list)
* add two missing llama- prefixes
* Updating docs for eval-callback binary to use new `llama-` prefix.
* Updating a few lingering doc references for rename of main to llama-cli
* Updating `run-with-preset.py` to use new binary names.
  Updating docs around `perplexity` binary rename.
* Updating documentation references for lookup-merge and export-lora
* Updating two small `main` references missed earlier in the finetune docs.
* Update apps.nix
* update grammar/README.md w/ new llama-* names
* update llama-rpc-server bin name + doc
* Revert "update llama-rpc-server bin name + doc"
This reverts commit e474ef1df4hanclinto@gmail.com > 
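
An aside on the fread warning mentioned in the list: some toolchains mark fread's result as must-use, and reading the grammar file through std::ifstream sidesteps the warning while simplifying error handling. A minimal sketch of that pattern, assuming a standalone helper; this is illustrative, not the exact gbnf-validator code:

```cpp
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

// Read an entire file into `out`; returns false on failure. Unlike fread,
// there is no unused-result warning to silence, and errors surface as a
// simple boolean instead of a short byte count.
static bool read_file(const std::string & path, std::string & out) {
    std::ifstream in(path, std::ios::binary);
    if (!in) {
        return false;
    }
    std::ostringstream ss;
    ss << in.rdbuf();  // stream the whole file into the buffer
    out = ss.str();
    return true;
}

int main(int argc, char ** argv) {
    if (argc < 2) {
        std::cerr << "usage: " << argv[0] << " <grammar.gbnf>\n";
        return 1;
    }
    std::string grammar;
    if (!read_file(argv[1], grammar)) {
        std::cerr << "failed to read " << argv[1] << "\n";
        return 1;
    }
    std::cout << "read " << grammar.size() << " bytes\n";
    return 0;
}
```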
						
						
					 
					
2024-06-13 00:41:52 +01:00

Ahmet Zeer
07cd41d096
TypoFix (#7162)

2024-05-09 10:16:45 +02:00

fraxy-v
92397d87a4
convert-llama2c-to-ggml : enable conversion of GQA models (#6237)

* convert-llama2c-to-ggml: enable conversion of multiqueries, #5608 (see the head-mapping sketch after this list)
* add test in build action
* Update build.yml
* Update build.yml
* Update build.yml
* gg patch 
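
Background on the GQA change: grouped-query attention uses fewer key/value heads than query heads (multi-query is the n_head_kv = 1 extreme), so a converter has to size the wk/wv tensors by n_head_kv rather than n_head and map each query head onto a shared KV head. A hedged sketch of that arithmetic with illustrative config values, not the converter's actual code:

```cpp
#include <cassert>
#include <cstdio>

int main() {
    // Illustrative llama2.c-style config values (not from a real checkpoint).
    const int n_head    = 32;              // query heads
    const int n_head_kv = 8;               // key/value heads; < n_head means GQA
    const int n_embd    = 4096;
    const int head_dim  = n_embd / n_head; // 128
    assert(n_head % n_head_kv == 0);
    const int group = n_head / n_head_kv;  // query heads sharing one KV head: 4

    // wq keeps the full width; wk/wv shrink with the KV head count.
    printf("wq: %d x %d\n", n_embd, n_head    * head_dim); // 4096 x 4096
    printf("wk: %d x %d\n", n_embd, n_head_kv * head_dim); // 4096 x 1024

    // Each query head h reads keys/values from KV head h / group.
    for (int h = 0; h < n_head; ++h) {
        printf("q head %2d -> kv head %d\n", h, h / group);
    }
    return 0;
}
```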
						
						
					 
					
2024-03-22 20:49:06 +02:00

Olivier Chafik
230d46c723
examples : update llama2.c converter to read vocab and write models in GGUF format (#2751)

						* llama2.c: direct gguf output (WIP)
* Simplify vector building logic
* llama2.c gguf conversion: fix token types in converter
* llama2.c: support copying vocab from a llama gguf model file
* llama2.c: update default path for vocab model + readme
* llama2.c: use defines for gguf keys
* llama2.c: escape whitespaces w/ U+2581 in vocab converter the llama.cpp way (see the sketch after this list)
* llama2.c converter: cleanups + take n_ff from config 
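
On the U+2581 item: SentencePiece vocabularies mark spaces with U+2581 (LOWER ONE EIGHTH BLOCK, "▁", UTF-8 bytes 0xE2 0x96 0x81), so a converter has to apply the same escaping when writing token strings. A minimal sketch of the substitution, with an illustrative function name rather than the converter's own:

```cpp
#include <cstdio>
#include <string>

// Replace each ASCII space with U+2581, the SentencePiece whitespace
// marker, encoded in UTF-8 as the three bytes 0xE2 0x96 0x81.
static std::string escape_whitespace(const std::string & text) {
    std::string out;
    out.reserve(text.size());
    for (char c : text) {
        if (c == ' ') {
            out += "\xE2\x96\x81";
        } else {
            out += c;
        }
    }
    return out;
}

int main() {
    printf("%s\n", escape_whitespace("hello world").c_str());  // hello▁world
    return 0;
}
```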
						
						
					 
					
2023-08-27 17:13:31 +03:00

Olivier Chafik
95385241a9
examples : restore the functionality to import llama2.c models (#2685)

						* Fix import of llama2.c models that don't share weights between embedding layers
* llama2c: reinstate ggmlv3 conversion output + update readme w/ gguf conv
* llama2.c: comment out legacy "load from ggml model" logic
* llama2.c: convert special-cased "<0xXX>" single byte tokens from tokenizer.bin (see the sketch after this list)
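
Context for that last item: SentencePiece tokenizers represent raw bytes as literal "<0xXX>" tokens, and a converter reading tokenizer.bin has to decode those back into single-byte strings. A hedged sketch of that special case; the helper name is illustrative, not the converter's code:

```cpp
#include <cstdio>
#include <cstdlib>
#include <string>

// If `tok` has the form "<0xXX>", decode it to the single byte it names
// and return true; otherwise leave `out` untouched and return false.
static bool try_parse_byte_token(const std::string & tok, std::string & out) {
    if (tok.size() == 6 && tok.compare(0, 3, "<0x") == 0 && tok[5] == '>') {
        const char hex[3] = { tok[3], tok[4], 0 };
        char * end = nullptr;
        const long v = strtol(hex, &end, 16);
        if (end != nullptr && *end == 0) {
            out = std::string(1, (char) v);
            return true;
        }
    }
    return false;
}

int main() {
    std::string b;
    if (try_parse_byte_token("<0x0A>", b)) {
        printf("decoded byte: %d\n", (int) b[0]);  // 10, i.e. '\n'
    }
    return 0;
}
```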
						
						
					 
					
2023-08-23 22:33:05 +03:00

byte-6174
b19edd54d5
Adding support for llama2.c models (#2559)

2023-08-12 01:17:25 +02:00