Mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-11-03 09:22:01 +00:00)
	Support diffusion models: Add Dream 7B (#14644)
* Support diffusion models: Add Dream 7B
* Move diffusion to examples
* Move stuff to examples. Add patch to not use kv-cache
* Address review comments
* Make sampling fast
* llama: remove diffusion functions
* Add basic timings + cleanup
* More cleanup
* Review comments: better formatting, use LOG instead of std::cerr, re-use batch, use ubatch instead of max_length
* fixup!
* Review: move everything to diffusion-cli for now
@@ -101,6 +101,7 @@ struct llama_vocab {
     llama_token token_sep() const;
     llama_token token_nl () const;
     llama_token token_pad() const;
+    llama_token token_mask() const;
 
     llama_token token_prefix() const;
     llama_token token_middle() const;
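To make the motivation for the new token_mask() accessor concrete, below is a minimal sketch of diffusion-style (iterative unmasking) text generation, the decoding scheme used by models like Dream 7B: the output region is initialized to mask tokens, the model predicts every position at once, and the most confident predictions are committed each step. Everything in the sketch (prediction, predict_fn, diffusion_generate, the dummy model) is a hypothetical stand-in for illustration only; it is not the llama.cpp or diffusion-cli API.

// Minimal sketch of diffusion-style (iterative unmasking) generation.
// Hypothetical stand-in code -- NOT the llama.cpp / diffusion-cli API.
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <functional>
#include <vector>

using token_id = int32_t;

// For every position, the model's best-guess token and its confidence.
struct prediction { token_id tok; float conf; };

// One full forward pass over the whole sequence. Diffusion decoding re-reads
// every token on every step, so the autoregressive KV cache does not apply.
using predict_fn = std::function<std::vector<prediction>(const std::vector<token_id> &)>;

std::vector<token_id> diffusion_generate(const std::vector<token_id> & prompt,
                                         size_t gen_len, token_id mask_id,
                                         int n_steps, const predict_fn & predict) {
    // Start from the prompt followed by gen_len mask tokens.
    std::vector<token_id> seq = prompt;
    seq.insert(seq.end(), gen_len, mask_id);

    for (int step = 0; step < n_steps; ++step) {
        const std::vector<prediction> preds = predict(seq);

        // Collect positions that are still masked.
        std::vector<size_t> masked;
        for (size_t i = prompt.size(); i < seq.size(); ++i) {
            if (seq[i] == mask_id) masked.push_back(i);
        }
        if (masked.empty()) break;

        // Unmask a fraction of the remaining positions, most confident first,
        // so that everything is revealed by the final step.
        const size_t n_unmask = std::max<size_t>(1, masked.size() / (n_steps - step));
        std::partial_sort(masked.begin(), masked.begin() + n_unmask, masked.end(),
                          [&](size_t a, size_t b) { return preds[a].conf > preds[b].conf; });
        for (size_t k = 0; k < n_unmask; ++k) {
            seq[masked[k]] = preds[masked[k]].tok;
        }
    }
    return seq;
}

int main() {
    const token_id MASK = 0; // hypothetical mask token id (cf. token_mask() above)

    // Dummy "model": always predicts token 42, more confident at earlier positions.
    predict_fn dummy = [](const std::vector<token_id> & seq) {
        std::vector<prediction> out(seq.size());
        for (size_t i = 0; i < seq.size(); ++i) {
            out[i] = { 42, 1.0f / float(i + 1) };
        }
        return out;
    };

    const std::vector<token_id> result =
        diffusion_generate({ 1, 2, 3 }, /*gen_len=*/8, MASK, /*n_steps=*/4, dummy);
    for (token_id t : result) {
        printf("%d ", t);
    }
    printf("\n");
    return 0;
}

Because every step re-reads the whole sequence, committed tokens included, the usual autoregressive KV cache does not help here, which matches the commit's notes about patching out the kv-cache and keeping the diffusion path in the diffusion-cli example rather than in the core llama library.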