Mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-10-30 08:42:00 +00:00)

Commit 2cd43f4900
* More performance with llamafile tinyblas on x86_64:
  - add bf16 support
  - change the dispatch strategy (thanks: https://github.com/ikawrakow/ik_llama.cpp/pull/71)
  - reduce memory bandwidth with a simpler, more cache-friendly tinyblas dispatch
* tinyblas dynamic dispatching
* sgemm: add M blocks.
* - git 2.47 uses short commit IDs of length 9.
  - the --show-progress option is not part of GNU Wget2
* remove unstable test
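For context, the sketch below shows the general shape of the two ideas named above, runtime kernel dispatch and blocking over the M dimension of an SGEMM. It is not the llamafile/tinyblas code: the kernel names, the block size of 64, and the AVX2 check via __builtin_cpu_supports are assumptions made only for illustration.

    // Minimal sketch (NOT the llamafile/tinyblas implementation): a generic
    // SGEMM kernel, an M-blocked variant, and a runtime selector between them.
    #include <cstdio>
    #include <vector>

    // C (MxN) += A (MxK) * B (KxN), row-major reference kernel.
    static void sgemm_generic(int M, int N, int K,
                              const float *A, const float *B, float *C) {
        for (int m = 0; m < M; ++m)
            for (int k = 0; k < K; ++k)
                for (int n = 0; n < N; ++n)
                    C[m * N + n] += A[m * K + k] * B[k * N + n];
    }

    // Cache-friendlier variant: process M in blocks so only a slice of the
    // A and C rows is live while B streams through.
    static void sgemm_m_blocked(int M, int N, int K,
                                const float *A, const float *B, float *C) {
        const int MB = 64; // hypothetical block size, for illustration only
        for (int m0 = 0; m0 < M; m0 += MB) {
            const int mb = (m0 + MB < M) ? MB : (M - m0);
            sgemm_generic(mb, N, K, A + (size_t)m0 * K, B, C + (size_t)m0 * N);
        }
    }

    using sgemm_fn = void (*)(int, int, int, const float *, const float *, float *);

    // Pick a kernel once at runtime from the host CPU features
    // (GCC/Clang builtin, x86 only).
    static sgemm_fn select_sgemm(void) {
    #if defined(__x86_64__) || defined(__i386__)
        if (__builtin_cpu_supports("avx2"))
            return sgemm_m_blocked;
    #endif
        return sgemm_generic;
    }

    int main() {
        const int M = 128, N = 128, K = 128;
        std::vector<float> A(M * K, 1.0f), B(K * N, 1.0f), C(M * N, 0.0f);
        select_sgemm()(M, N, K, A.data(), B.data(), C.data());
        std::printf("C[0] = %.1f (expected %d)\n", C[0], K);
        return 0;
    }

The point of the blocking is bandwidth: each M block reuses the same B data while touching only a small slice of A and C, which is the "more cache friendly" part of the dispatch change described in the commit message.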
		
			
				
	
	
	
		