commit ab14019821
Author: Aman Gupta
Date:   2025-07-16 20:03:51 +08:00

Support diffusion models: Add Dream 7B (#14644)

* Support diffusion models: Add Dream 7B
* Move diffusion to examples
* Move stuff to examples. Add patch to not use kv-cache
* Address review comments
* Make sampling fast
* llama: remove diffusion functions
* Add basic timings + cleanup
* More cleanup
* Review comments: better formatting, use LOG instead of std::cerr, re-use batch, use ubatch instead of max_length
* fixup!
* Review: move everything to diffusion-cli for now

commit 4a4f426944
Author: Gabriel Larson
Date:   2025-07-15 21:54:22 +02:00

model : add Kimi-K2 support (#14654)

* Kimi-K2 conversion
* add Kimi_K2 pre type
* Kimi-K2
* Kimi-K2 unicode
* Kimi-K2
* LLAMA_MAX_EXPERTS 384
* fix vocab iteration
* regex space fix
* add kimi-k2 to pre_computed_hashes
* Updated with kimi-k2 get_vocab_base_pre hash
* fix whitespaces
* fix flake errors
* remove more unicode.cpp whitespaces
* change set_vocab() flow
* add moonshotai-Kimi-K2.jinja to /models/templates/
* update moonshotai-Kimi-K2.jinja
* add kimi-k2 chat template
* add kimi-k2
* update NotImplementedError
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* except Exception
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* LLM_CHAT_TEMPLATE_KIMI_K2 if(add_ass){}
---------
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

commit 0d5375d54b
Author: Georgi Gerganov
Date:   2025-07-11 13:46:07 +03:00

llama : move enum llama_vocab_pre_type to implementation (#14631)

ggml-ci

commit 88fc854b4b
Author: Sigbjørn Skjæret
Date:   2025-06-20 14:04:09 +02:00

llama : improve sep token handling (#14272)

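Since this entry touches the sep-token plumbing, here is a hedged sketch of how paired BERT-style inputs are typically assembled from the vocab's special-token flags. llama_vocab_get_add_sep is assumed to be the accessor this PR introduces; the other calls are the established llama.h vocab API, and make_pair_input is an illustrative helper, not code from the commit.

```cpp
// Sketch: assemble [CLS] a [SEP] b [SEP/EOS] for a BERT-style encoder.
// llama_vocab_get_add_sep is assumed to come from this PR (#14272); verify
// against llama.h. The remaining calls are the existing vocab API.
#include "llama.h"

#include <vector>

static std::vector<llama_token> make_pair_input(const llama_vocab * vocab,
                                                const std::vector<llama_token> & a,
                                                const std::vector<llama_token> & b) {
    std::vector<llama_token> out;
    if (llama_vocab_get_add_bos(vocab)) {
        out.push_back(llama_vocab_bos(vocab));  // [CLS] for WPM vocabs
    }
    out.insert(out.end(), a.begin(), a.end());
    if (llama_vocab_get_add_sep(vocab)) {
        out.push_back(llama_vocab_sep(vocab));  // separator between the two segments
    }
    out.insert(out.end(), b.begin(), b.end());
    if (llama_vocab_get_add_eos(vocab)) {
        out.push_back(llama_vocab_eos(vocab));  // trailing [SEP]/EOS where configured
    }
    return out;
}
```
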
commit 10d2af0eaa
Author: Johannes Gäßler
Date:   2025-05-12 14:44:49 +02:00

llama/ggml: add LLM training support (#10544)

* llama/ggml: add LLM training support
  more compact progress bar
  llama_save_model_to_file
  llama_opt_param_filter
  ggml_graph_dup force_grads
  refactor ggml_opt, fix test-opt
* remove logits_all
* refactor CUDA implementation for ACC
* reset graph at beginning of opt period

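A rough sketch of the training entry points added here, for orientation. The llama_opt_init/llama_opt_epoch names appear in llama.h after this PR; the struct fields, callbacks, and the save-function name below are assumptions based on the commit message and the accompanying finetune example, and should be verified against llama.h and ggml-opt.h.

```cpp
// Hedged sketch of the llama_opt_* training flow this PR adds. Every
// signature here is an assumption to check against llama.h / ggml-opt.h.
#include "ggml-opt.h"
#include "llama.h"

static void finetune_sketch(llama_model * model, llama_context * ctx,
                            ggml_opt_dataset_t dataset) {
    llama_opt_params params = {};
    params.param_filter = llama_opt_param_filter_all;            // train every tensor
    params.get_opt_pars = ggml_opt_get_default_optimizer_params; // default optimizer settings
    llama_opt_init(ctx, model, params);

    ggml_opt_result_t res_train = ggml_opt_result_init();
    ggml_opt_result_t res_eval  = ggml_opt_result_init();

    // One pass over the dataset; idata_split separates train from eval data.
    // The finetune example passes progress-bar callbacks here instead of nullptr.
    llama_opt_epoch(ctx, dataset, res_train, res_eval,
                    /*idata_split    =*/ 0,
                    /*callback_train =*/ nullptr,
                    /*callback_eval  =*/ nullptr);

    llama_save_model_to_file(model, "finetuned.gguf"); // name as listed in the commit message

    ggml_opt_result_free(res_train);
    ggml_opt_result_free(res_eval);
}
```
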
commit 08f10f69c3
Author: Georgi Gerganov
Date:   2025-01-12 12:15:53 +02:00

llama : remove notion of CLS token (#11064)

ggml-ci

commit afa8a9ec9b
Author: Georgi Gerganov
Date:   2025-01-12 11:32:42 +02:00

llama : add llama_vocab, functions -> methods, naming (#11110)

* llama : functions -> methods (#11110)
* llama : add struct llama_vocab to the API (#11156)
* hparams : move vocab params to llama_vocab (#11159)
* vocab : more pimpl (#11165)
* vocab : minor tokenization optimizations (#11160)
* lora : update API names (#11167)
* llama : update API names to use correct prefix (#11174)
* vocab : llama_vocab_add_[be]os -> llama_vocab_get_add_[be]os (#11174)
* vocab : llama_vocab_n_vocab -> llama_vocab_n_tokens (#11174)
---------
Co-authored-by: Diego Devesa <slarengh@gmail.com>

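For orientation, a minimal sketch of the renamed surface after this refactor, using accessors that exist in llama.h; `model` is assumed to be an already loaded model, and inspect_vocab is an illustrative helper.

```cpp
// Minimal sketch of the llama_vocab API after this refactor (llama.h).
#include "llama.h"

#include <cstdio>

static void inspect_vocab(const llama_model * model) {
    const llama_vocab * vocab = llama_model_get_vocab(model);

    const int32_t     n_tokens = llama_vocab_n_tokens(vocab);    // was llama_vocab_n_vocab
    const bool        add_bos  = llama_vocab_get_add_bos(vocab); // was llama_vocab_add_bos
    const llama_token bos      = llama_vocab_bos(vocab);

    printf("n_tokens = %d, add_bos = %d, bos = %d\n", n_tokens, (int) add_bos, bos);
}
```
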
commit f66f582927
Author: Georgi Gerganov
Date:   2025-01-03 10:18:53 +02:00

llama : refactor src/llama.cpp (#10902)

* llama : scatter llama.cpp into multiple modules (wip)
* llama : control-vector -> adapter
* llama : arch
* llama : mmap
* ci : remove BUILD_SHARED_LIBS=OFF
* llama : arch (cont)
* llama : chat
* llama : model
* llama : hparams
* llama : adapter
* examples : fix
* llama : kv cache
* llama : impl
* llama : batch
* llama : context
* llama : context (cont)
* llama : model loader
* common : update lora
* llama : quant
* llama : quant (cont)

commit 30caac3a68
Author: Georgi Gerganov
Date:   2024-12-24 09:44:20 +02:00

llama : the WPM vocabs use the CLS token as BOS (#10930)

* llama : the WPM vocabs use the CLS token as BOS
* llama : add comment

commit ff252ea48e
Author: wwoodsTM
Date:   2024-10-25 19:07:34 +03:00

llama : add DRY sampler (#9702)

* sampling : add DRY sampler (post-refactor)
* DRY: Trying to fix coauthors, removed unneeded line
* DRY: Fixed redundant code
* DRY: Fixed crash issue due to DRY being in chain but uninitialized
---------
Co-authored-by: l3utterfly <gc.pthzfoldr@gmail.com>
Co-authored-by: pi6am <34464159+pi6am@users.noreply.github.com>

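A hedged sketch of constructing the DRY sampler. The signature shown is the current llama.h form, which postdates this commit (at merge time the initializer took a llama_model pointer rather than vocab plus n_ctx_train), and the parameter values are commonly used DRY defaults, not values taken from this change.

```cpp
// Sketch: build a DRY (repetition-suppression) sampler. Signature per
// current llama.h; see the note above about the at-merge form.
#include "llama.h"

static llama_sampler * make_dry(const llama_vocab * vocab, int32_t n_ctx_train) {
    const char * seq_breakers[] = { "\n", ":", "\"", "*" }; // typical default breakers
    return llama_sampler_init_dry(vocab, n_ctx_train,
                                  /*dry_multiplier     =*/ 0.8f,
                                  /*dry_base           =*/ 1.75f,
                                  /*dry_allowed_length =*/ 2,
                                  /*dry_penalty_last_n =*/ -1,  // -1: scan the whole context
                                  seq_breakers, 4);
}
```
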
commit 755a9b2bf0
Author: Georgi Gerganov
Date:   2024-10-15 16:35:33 +03:00

llama : add infill sampler (#9896)

ggml-ci

commit 11ac9800af
Author: Georgi Gerganov
Date:   2024-10-12 08:21:51 +03:00

llama : improve infill support and special token detection (#9798)

* llama : improve infill support
* llama : add more FIM token strings
* server : update prompt on slot restore (#9800)
* gguf : deprecate old FIM token KVs

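For context, a sketch of assembling a prefix-suffix-middle (PSM) infill prompt from the FIM special tokens this change standardizes. The accessor spellings follow the later llama.h API (llama_vocab_fim_*; at commit time they were llama_token_fim_*), and fim_prompt is an illustrative helper, not code from the commit.

```cpp
// Sketch: PSM-style fill-in-the-middle prompt assembly from the FIM
// special tokens. Accessor spellings per current llama.h (see note above).
#include "llama.h"

#include <vector>

static std::vector<llama_token> fim_prompt(const llama_vocab * vocab,
                                           const std::vector<llama_token> & prefix,
                                           const std::vector<llama_token> & suffix) {
    std::vector<llama_token> out;
    out.push_back(llama_vocab_fim_pre(vocab));
    out.insert(out.end(), prefix.begin(), prefix.end());
    out.push_back(llama_vocab_fim_suf(vocab));
    out.insert(out.end(), suffix.begin(), suffix.end());
    out.push_back(llama_vocab_fim_mid(vocab)); // model generates the middle from here
    return out;
}
```
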
commit 8c475b97b8
Author: Georgi Gerganov
Date:   2024-10-05 15:55:04 +03:00

rerank : use [SEP] token instead of [BOS] (#9737)

* rerank : use [SEP] token instead of [BOS]
* common : sanity check for non-NULL tokens
* ci : adjust rank score interval
* ci : add shebang to run.sh

commit 6102037bbb
Author: Zhenwei Jin
Date:   2024-09-28 15:10:58 +03:00

vocab : refactor tokenizer to reduce init overhead (#9449)

* refactor tokenizer
* llama : make llm_tokenizer more private
* remove unused files
* remove unused fields to avoid an unused-field build error
* avoid symbol link error
* Update src/llama.cpp
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

commit 31ac5834fe
Author: Georgi Gerganov
Date:   2024-09-24 10:16:06 +03:00

llama : keep track of all EOG tokens in the vocab (#9609)

ggml-ci

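On the consumer side, the point of keeping a single EOG set is that a generation loop needs only one membership check, sketched below. The spelling follows the later llama_vocab rename; at the time of this commit the call was llama_token_is_eog(model, tok).

```cpp
// Minimal sketch: stop generation on any end-of-generation token
// (EOS, EOT and similar). Post-rename llama.h spelling.
#include "llama.h"

static bool should_stop(const llama_vocab * vocab, llama_token tok) {
    return llama_vocab_is_eog(vocab, tok);
}
```
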
commit df270ef745
Author: Georgi Gerganov
Date:   2024-09-07 15:16:19 +03:00

llama : refactor sampling v2 (#9294)

- Add `struct llama_sampler` and `struct llama_sampler_i`
- Add `llama_sampler_` API
- Add `llama_sampler_chain_` API for chaining multiple samplers
- Remove `LLAMA_API_INTERNAL`
- Add `llama_perf_` API and remove old `llama_print_timings` and `llama_reset_timings`

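A minimal sketch of the chain API this refactor introduces, built only from initializers present in llama.h; the particular samplers and their values are illustrative.

```cpp
// Sketch: composing samplers with the llama_sampler_chain_ API.
#include "llama.h"

static llama_sampler * make_chain() {
    llama_sampler * chain = llama_sampler_chain_init(llama_sampler_chain_default_params());

    llama_sampler_chain_add(chain, llama_sampler_init_top_k(40));
    llama_sampler_chain_add(chain, llama_sampler_init_temp(0.8f));
    llama_sampler_chain_add(chain, llama_sampler_init_dist(LLAMA_DEFAULT_SEED));

    return chain; // sample via llama_sampler_sample(chain, ctx, -1); free with llama_sampler_free(chain)
}
```

Chained samplers are applied in insertion order, which is why the distribution sampler that actually picks the token goes last.
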
commit 4af8420afb
Author: Zhenwei Jin
Date:   2024-08-15 10:23:23 +03:00

common : remove duplicate function llama_should_add_bos_token (#8778)

commit d3f0c7166a
Author: fairydreaming
Date:   2024-08-05 09:38:01 +02:00

Stop the generation when <|eom_id|> token is encountered - needed for Llama 3.1 tool call support (#8858)

* gguf-py, llama : add constants and methods related to Llama-3.1 <|eom_id|> token
* llama : find Llama-3.1 <|eom_id|> token id during vocab loading
* llama-vocab : add Llama-3.1 <|eom_id|> token to the set of tokens stopping the generation
---------
Co-authored-by: Stanisław Szymczyk <sszymczy@gmail.com>

commit 938943cdbf
Author: Georgi Gerganov
Date:   2024-07-23 13:10:17 +03:00

llama : move vocab, grammar and sampling into separate files (#8508)

* llama : move sampling code into llama-sampling
* llama : move grammar code into llama-grammar
* cont : pre-fetch rules
* llama : deprecate llama_sample_grammar
* llama : move tokenizers into llama-vocab
* make : update llama.cpp deps [no ci]
* llama : redirect external API to internal APIs
* llama : suffix the internal APIs with "_impl"
* llama : clean-up