	llama : remove n_threads from llama_decode_internal (#3614)
This commit removes `n_threads` from the `llama_decode_internal`
function's doc comment, as the parameter no longer exists.
The parameter appears to have been removed in
commit 16bc66d947 ("llama.cpp : split
llama_context_params into model and context params").
Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>
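For context, a minimal sketch of where the thread count lives after that split: it is a field of `llama_context_params` supplied when the context is created, so `llama_decode()` / `llama_decode_internal()` no longer receive it as an argument. This is an illustrative example against the public `llama.h` API of that era, not code from the commit itself; the model path is hypothetical.

```cpp
#include "llama.h"

int main() {
    // Model-level parameters (split out of the old llama_context_params)
    llama_model_params mparams = llama_model_default_params();
    llama_model * model = llama_load_model_from_file("model.gguf", mparams);
    if (model == nullptr) {
        return 1;
    }

    // Context-level parameters now carry the thread counts
    llama_context_params cparams = llama_context_default_params();
    cparams.n_threads       = 4;  // threads used for single-token decoding
    cparams.n_threads_batch = 4;  // threads used for prompt/batch decoding

    llama_context * ctx = llama_new_context_with_model(model, cparams);

    // ... build a llama_batch and call llama_decode(ctx, batch) here;
    //     the thread count is taken from the context, not from the call site.

    llama_free(ctx);
    llama_free_model(model);
    return 0;
}
```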