mirror of https://github.com/ggml-org/llama.cpp.git
synced 2025-10-30 08:42:00 +00:00

commit 3600cc2886
* llama : use n_swa + n_ubatch cells for SWA cache

ggml-ci

* llama : add warning about multi-sequence SWA contexts
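The first change concerns how many cells the dedicated SWA (sliding-window attention) KV cache is allocated: only the attention window plus one micro-batch needs to be resident, rather than the full context. Below is a minimal sketch of that sizing rule as stated in the commit message; the function name and the concrete values are illustrative assumptions, not the actual llama.cpp API.

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical helper (not the real llama.cpp code): the SWA KV cache only
// needs enough cells to cover the sliding window (n_swa) plus the tokens of
// the micro-batch currently being processed (n_ubatch).
static uint32_t swa_cache_size(uint32_t n_swa, uint32_t n_ubatch) {
    return n_swa + n_ubatch;
}

int main() {
    const uint32_t n_ctx    = 32768; // full context: what a non-SWA cache would allocate
    const uint32_t n_swa    = 4096;  // sliding-window size of the model (assumed value)
    const uint32_t n_ubatch = 512;   // micro-batch size (assumed value)

    printf("full-context cells: %u\n", (unsigned) n_ctx);
    printf("SWA cache cells:    %u\n", (unsigned) swa_cache_size(n_swa, n_ubatch));
    return 0;
}
```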