Mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-11-02 09:12:03 +00:00)
* llama : fix session saving/loading
* llama : temp fix for clearing "future" tokens from the KV cache
* llama : fix handling of "future" tokens when loading sessions
* llama : fix comments for llama_kv_cache API
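These commits touch llama.cpp's session save/load path and the llama_kv_cache API. As a rough illustration of the workflow they concern, the sketch below saves the evaluated tokens together with the KV cache and, on reload, clears any "future" KV-cache cells past the restored token count. It is a minimal sketch, not the actual fix: it assumes the llama.h C API of that period (llama_save_session_file, llama_load_session_file, llama_kv_cache_seq_rm), assumes everything lives in sequence 0, and omits model/context setup; the helper names save_session and restore_session are illustrative.

```cpp
// Hedged sketch of session save/restore with llama.cpp's C API.
// Assumes llama_save_session_file / llama_load_session_file /
// llama_kv_cache_seq_rm as declared in llama.h of this era.
// Model and context creation are omitted; `ctx` is a valid llama_context.
#include "llama.h"

#include <cstdio>
#include <vector>

// Save the tokens evaluated so far together with the KV cache state.
static bool save_session(llama_context * ctx,
                         const char * path,
                         const std::vector<llama_token> & tokens) {
    return llama_save_session_file(ctx, path, tokens.data(), tokens.size());
}

// Restore a session and drop any KV-cache entries at positions beyond the
// restored token count ("future" tokens), so the next decode starts from a
// consistent state.
static bool restore_session(llama_context * ctx,
                            const char * path,
                            std::vector<llama_token> & tokens,
                            size_t n_ctx) {
    tokens.resize(n_ctx);
    size_t n_loaded = 0;
    if (!llama_load_session_file(ctx, path, tokens.data(), tokens.size(), &n_loaded)) {
        fprintf(stderr, "failed to load session from %s\n", path);
        return false;
    }
    tokens.resize(n_loaded);

    // Remove cells of sequence 0 with positions in [n_loaded, end);
    // -1 is treated as "no upper bound" by llama_kv_cache_seq_rm.
    llama_kv_cache_seq_rm(ctx, 0, (llama_pos) n_loaded, -1);
    return true;
}
```

In the real examples/main code the sequence id, how much of the saved prompt is reused, and the exact clearing point all depend on the runtime flags, so treat the snippet as an outline of the intent behind these commits rather than the implementation itself.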