Mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-10-31 08:51:55 +00:00)
Commit e920ed393d
* Fix Vulkan on Intel ARC
* Optimize matmul for Intel ARC
* Add Vulkan dequant test
* Add Vulkan debug and validate flags to Make and CMakeLists.txt
* Enable asynchronous transfers in Vulkan backend
* Fix flake8
* Disable Vulkan async backend functions for now
* Also add Vulkan run tests command to Makefile and CMakeLists.txt
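For context, debug, validation, and self-test toggles like the ones this commit mentions are typically exposed as CMake options that turn into preprocessor defines guarding extra logging and test paths in the backend. The sketch below is a minimal illustration of that pattern, not the commit's actual CMakeLists.txt; the option and macro names (LLAMA_VULKAN_DEBUG, LLAMA_VULKAN_VALIDATE, LLAMA_VULKAN_RUN_TESTS, GGML_VULKAN_*) are assumptions made for the example.

```cmake
# Hypothetical sketch: expose Vulkan backend debug/validate/run-tests flags
# as CMake options and forward them to the compiler as defines.
# Names are illustrative, not taken from the commit's build files.
cmake_minimum_required(VERSION 3.14)
project(vulkan-backend-example CXX)

option(LLAMA_VULKAN           "enable the Vulkan backend"           ON)
option(LLAMA_VULKAN_DEBUG     "enable Vulkan backend debug output"  OFF)
option(LLAMA_VULKAN_VALIDATE  "enable Vulkan backend validation"    OFF)
option(LLAMA_VULKAN_RUN_TESTS "run Vulkan backend self-tests"       OFF)

if (LLAMA_VULKAN)
    find_package(Vulkan REQUIRED)
    add_library(ggml-vulkan ggml-vulkan.cpp)
    target_link_libraries(ggml-vulkan PRIVATE Vulkan::Vulkan)

    # Each option becomes a preprocessor define so the backend source can
    # guard its extra logging, validation, and test code behind #ifdef.
    if (LLAMA_VULKAN_DEBUG)
        target_compile_definitions(ggml-vulkan PRIVATE GGML_VULKAN_DEBUG)
    endif()
    if (LLAMA_VULKAN_VALIDATE)
        target_compile_definitions(ggml-vulkan PRIVATE GGML_VULKAN_VALIDATE)
    endif()
    if (LLAMA_VULKAN_RUN_TESTS)
        target_compile_definitions(ggml-vulkan PRIVATE GGML_VULKAN_RUN_TESTS)
    endif()
endif()
```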