Mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-10-30 08:42:00 +00:00)
	d32e03f449
* server : add SWA checkpoints

ggml-ci

* cont : server clean-up

* server : handle state restore fails

* llama : add extended llama_state_seq_ API

* server : do not make checkpoints if --swa-full

ggml-ci

* llama : remove flags value for NONE

* server : configure number of SWA checkpoints with CLI arg

ggml-ci

* args : fix scope of new argument
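The checkpoints referenced above come down to serializing and restoring the KV-cache state of a single sequence. Below is a minimal sketch using the existing llama_state_seq_get_size / llama_state_seq_get_data / llama_state_seq_set_data calls from llama.h; the helper names are hypothetical, and the extended *_ext variants with a flags parameter that this commit introduces are not reproduced here, since their exact signatures are an assumption.

```cpp
// Sketch: checkpoint and restore the state of one sequence via the public
// llama_state_seq_ calls from llama.h. save_checkpoint/restore_checkpoint are
// hypothetical helper names, not functions from the commit itself.
#include <cstdint>
#include <vector>

#include "llama.h"

// Serialize the state of sequence `seq_id` into a byte buffer (a checkpoint).
static std::vector<uint8_t> save_checkpoint(llama_context * ctx, llama_seq_id seq_id) {
    std::vector<uint8_t> buf(llama_state_seq_get_size(ctx, seq_id));
    const size_t written = llama_state_seq_get_data(ctx, buf.data(), buf.size(), seq_id);
    buf.resize(written);
    return buf;
}

// Restore a previously saved checkpoint into `seq_id`.
// Returns false when the restore fails, so the caller can fall back to
// recomputing the sequence instead of reusing the checkpoint.
static bool restore_checkpoint(llama_context * ctx, const std::vector<uint8_t> & buf, llama_seq_id seq_id) {
    const size_t read = llama_state_seq_set_data(ctx, buf.data(), buf.size(), seq_id);
    return read > 0; // a zero return indicates the restore did not succeed
}
```

A failed restore (the zero-return case) is what the "handle state restore fails" item refers to: the server presumably falls back to reprocessing the prompt rather than reusing the checkpoint. How many such checkpoints are kept, and whether they are made at all when --swa-full is set, is controlled by the new CLI argument mentioned in the commit message.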