Mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-10-31 08:51:55 +00:00)

Commit 77d5e9a76a
* ggml: dynamic x86_64 feature detection for FP32 <-> FP16/BF16 conversion
* move fp converter to ggml-cpu
* Switch ggml_compute_forward_get_rows_f16/bf16 to new ggml_cpu_fp16/bf16_to_fp32
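The first bullet describes choosing the FP16/FP32 conversion path at runtime, based on what the CPU reports, instead of fixing it at compile time. The C sketch below illustrates that pattern only; it is not the actual ggml code, and the names fp16_to_fp32_scalar, fp16_to_fp32_f16c, and select_fp16_to_fp32 are hypothetical. It probes for F16C support at startup and otherwise falls back to a portable scalar decoder.

/*
 * Illustrative sketch only: runtime-dispatched FP16 -> FP32 conversion.
 * Function names are hypothetical and are not the real ggml symbols.
 */
#include <stdint.h>
#include <string.h>

typedef uint16_t fp16_t;

/* Portable scalar fallback: decode an IEEE 754 half-precision value by hand. */
static float fp16_to_fp32_scalar(fp16_t h) {
    uint32_t sign = (uint32_t)(h >> 15) << 31;
    uint32_t exp  = (h >> 10) & 0x1f;
    uint32_t mant = h & 0x3ff;
    uint32_t bits;

    if (exp == 0) {
        if (mant == 0) {
            bits = sign;                                  /* signed zero       */
        } else {
            int e = -1;                                   /* subnormal: renorm */
            do { mant <<= 1; e++; } while ((mant & 0x400) == 0);
            bits = sign | (uint32_t)(127 - 15 - e) << 23 | (mant & 0x3ff) << 13;
        }
    } else if (exp == 0x1f) {
        bits = sign | 0x7f800000u | mant << 13;           /* inf / NaN         */
    } else {
        bits = sign | (exp - 15 + 127) << 23 | mant << 13;
    }

    float f;
    memcpy(&f, &bits, sizeof f);
    return f;
}

#if defined(__x86_64__) && (defined(__GNUC__) || defined(__clang__))
#include <immintrin.h>
/* Hardware path: F16C conversion, enabled only for this one function. */
__attribute__((target("f16c")))
static float fp16_to_fp32_f16c(fp16_t h) {
    return _cvtsh_ss(h);
}
#endif

typedef float (*fp16_to_fp32_fn)(fp16_t);

/* Selected once at runtime, based on what the CPU actually supports. */
static fp16_to_fp32_fn select_fp16_to_fp32(void) {
#if defined(__x86_64__) && (defined(__GNUC__) || defined(__clang__))
    if (__builtin_cpu_supports("f16c")) {
        return fp16_to_fp32_f16c;
    }
#endif
    return fp16_to_fp32_scalar;
}

In this pattern the selector runs once at initialization and the chosen function pointer is used on the hot path, which is what lets a single binary take the F16C route on CPUs that have it while still running on older hardware.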