	ggml : add ggml_cpu_has_avx_vnni() (#4589)
* feat: add avx_vnni based on Intel documents
* ggml: add AVX VNNI detection based on Intel documentation
* llama: add AVX VNNI information display
* docs: add more details about using oneMKL and oneAPI for Intel processors
* ggml.c: fix indentation

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
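For context, ggml exposes CPU capabilities through small query helpers. Below is a minimal sketch of what ggml_cpu_has_avx_vnni() could look like, assuming the compiler defines the __AVXVNNI__ macro when AVX-VNNI code generation is enabled (as GCC and Clang do under -mavxvnni); the actual implementation in this commit may differ in detail:

```c
// Sketch: report whether this binary was compiled with AVX-VNNI support.
// Assumes the __AVXVNNI__ predefined macro (set by GCC/Clang with -mavxvnni),
// following the same compile-time pattern as the other ggml_cpu_has_*() helpers.
int ggml_cpu_has_avx_vnni(void) {
#if defined(__AVXVNNI__)
    return 1;
#else
    return 0;
#endif
}
```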
@@ -10780,6 +10780,7 @@ const char * llama_print_system_info(void) {
 
     s  = "";
     s += "AVX = "         + std::to_string(ggml_cpu_has_avx())         + " | ";
+    s += "AVX_VNNI = "    + std::to_string(ggml_cpu_has_avx_vnni())    + " | ";
     s += "AVX2 = "        + std::to_string(ggml_cpu_has_avx2())        + " | ";
     s += "AVX512 = "      + std::to_string(ggml_cpu_has_avx512())      + " | ";
     s += "AVX512_VBMI = " + std::to_string(ggml_cpu_has_avx512_vbmi()) + " | ";
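As a usage note, the capability string built above can be printed at startup. A minimal sketch, assuming llama.h declares const char * llama_print_system_info(void) as shown in the hunk header; the header name and output layout are taken from the diff, not verified against this exact revision:

```c
#include <stdio.h>
#include "llama.h"   /* assumed header declaring llama_print_system_info() */

int main(void) {
    /* Prints one line of capability flags, e.g.
       "AVX = 1 | AVX_VNNI = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | ..." */
    printf("system info: %s\n", llama_print_system_info());
    return 0;
}
```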
Author: automaticcat