Mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-10-30 08:42:00 +00:00)
Commit e81b8e4b7f
* llama: use max. GPU layers by default, auto -fa
* ggml-backend: abort instead of segfault
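The first bullet changes the default offload behavior: when the user does not request a layer count, everything is offloaded. A minimal C sketch of that kind of defaulting logic, with illustrative names only (the actual llama.cpp parameter handling is more involved):

```c
#include <stdint.h>

// Sentinel meaning "the user did not pass -ngl"; illustrative,
// not the real llama.cpp constant.
#define N_GPU_LAYERS_AUTO (-1)

// If no explicit layer count was requested, offload every layer
// of the model to the GPU by default.
static int32_t resolve_n_gpu_layers(int32_t requested, int32_t n_layers_model) {
    if (requested == N_GPU_LAYERS_AUTO) {
        return n_layers_model;
    }
    return requested;
}
```

The second bullet swaps a hard crash for a controlled abort: failing fast with a readable diagnostic is easier to debug than a segfault somewhere deeper in the call stack. A self-contained C sketch of that defensive pattern (the names are stand-ins, not the real ggml-backend API):

```c
#include <stdio.h>
#include <stdlib.h>

// Illustrative stand-in for a backend handle.
typedef struct backend backend;

// Abort with a message instead of letting a NULL pointer propagate
// and segfault at some unrelated point later on.
static backend * require_backend(backend * b, const char * what) {
    if (b == NULL) {
        fprintf(stderr, "fatal: %s: backend is NULL\n", what);
        abort();
    }
    return b;
}

int main(void) {
    backend * b = NULL;                // e.g. no usable device was found
    require_backend(b, "device init"); // aborts with a message here
    return 0;
}
```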