Mirror of https://github.com/ggml-org/llama.cpp.git
	ggml-cpu: Faster IQ1 mul_mat_vec on AVX2 using BMI2 instructions (#12154)
* ggml-cpu: Faster IQ1 mul_mat_vec on AVX2 using BMI2 instructions
* cmake: Add GGML_BMI2 build option
* ggml: enable BMI2 on relevant CPU variants
* ggml-cpu: include BMI2 in backend score
* ggml-cpu: register BMI2 in ggml_backend_cpu_get_features
* ggml-cpu: add __BMI2__ define when using MSVC
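Per the commit title, the speedup comes from using BMI2 bit-manipulation instructions (notably PDEP) to unpack the packed IQ1 quant bits without a per-bit shift-and-mask loop. The sketch below only illustrates that technique; it is not the kernel from this commit, and the helper name expand_1bit_to_bytes is hypothetical.

// Illustrative sketch of the BMI2 trick, not the actual ggml kernel.
// PDEP deposits the low-order bits of its first operand into the bit
// positions selected by the mask, so 8 packed 1-bit quants can be
// scattered into 8 byte lanes in a single instruction.
#include <immintrin.h>
#include <stdint.h>

#if defined(__BMI2__)
static inline uint64_t expand_1bit_to_bytes(uint8_t packed) {
    // Bit i of `packed` lands in bit 8*i, i.e. the low bit of byte i.
    return _pdep_u64(packed, 0x0101010101010101ULL);
}
#else
// Portable fallback: the shift/mask loop the BMI2 path replaces.
static inline uint64_t expand_1bit_to_bytes(uint8_t packed) {
    uint64_t out = 0;
    for (int i = 0; i < 8; ++i) {
        out |= (uint64_t)((packed >> i) & 1) << (8 * i);
    }
    return out;
}
#endif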
@@ -80,6 +80,7 @@ extern "C" {
     GGML_BACKEND_API int ggml_cpu_has_avx        (void);
     GGML_BACKEND_API int ggml_cpu_has_avx_vnni   (void);
     GGML_BACKEND_API int ggml_cpu_has_avx2       (void);
+    GGML_BACKEND_API int ggml_cpu_has_bmi2       (void);
     GGML_BACKEND_API int ggml_cpu_has_f16c       (void);
     GGML_BACKEND_API int ggml_cpu_has_fma        (void);
     GGML_BACKEND_API int ggml_cpu_has_avx512     (void);
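A minimal usage sketch of the new runtime feature query, assuming the declaration above lives in ggml-cpu.h alongside the other ggml_cpu_has_* functions:

// Usage sketch: query the feature flag added by this commit at runtime,
// e.g. to decide whether a BMI2-optimized kernel can be dispatched.
#include <stdio.h>
#include "ggml-cpu.h"   // assumed header for the ggml_cpu_has_* API

int main(void) {
    printf("AVX2: %d  BMI2: %d\n", ggml_cpu_has_avx2(), ggml_cpu_has_bmi2());
    return 0;
}

At build time, the GGML_BMI2 CMake option mentioned in the commit message presumably gates this code path, e.g. cmake -B build -DGGML_BMI2=ON.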
Author: Rémy O