commit 6609507a91 (tag: b5769)
Author: Sigbjørn Skjæret
Date: 2025-06-28 09:57:07 +02:00

ci : fix windows build and release (#14431)

commit ceb1bf5a34
Author: Jeff Bolz
Date: 2025-06-27 22:35:30 -05:00

vulkan: Fix GGML_VULKAN_SHADER_DEBUG_INFO (#14427)

This setting needs to be passed through to vulkan-shaders-gen

commit 72babea5de
Author: Georgi Gerganov
Date: 2025-06-27 21:42:02 +03:00

graph : make llm_graph_context destructor virtual (#14410)

commit 43678060c1
Author: Georgi Gerganov
Date: 2025-06-27 17:55:45 +03:00

recurrent : call balloc split_reset() in init_batch() (#14414)

commit 8d94219a4a
Author: Radoslav Gerganov
Date: 2025-06-27 16:41:40 +03:00

ggml : add ggml_set_rows (#14274)

* ggml : add ggml_set_rows
  Add ggml_set_rows(a, b, c) which copies rows from 'b' into 'a' using
  indices from 'c'. ref: #8366
* use I64 for indices
* ggml : add repeat impl for i64
* ggml : add ggml_is_contiguous_rows
* ggml : ggml_set_rows support broadcast
* ggml : ggml_set_rows support quantized dst
* ggml : support GGML_TYPE_F32 ".from_float" trait
* ggml : ggml_set_rows update comment + better index name
* tests : add ggml_set_rows
* metal : add ggml_set_rows implementation
* ggml : simplify forward_dup_f32
* ggml : fix supports_op
* tests : add comment to set_rows
* ggml : leave the repeat_i64 for a separate PR
* ggml : set_rows use std::min instead of MIN
* ggml : better error message for set_rows unsupported type
* metal : perform op->type check only once
* tests : more consistent implementation + more tests

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

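The entry above defines the operation as ggml_set_rows(a, b, c): rows of 'b' are written into 'a' at the row positions given by the I64 indices in 'c'. A plain-Python sketch of that semantics (an illustrative model, not the actual ggml C API):

```python
# Illustrative model of the set_rows semantics: copy each row of `b` into
# `a` at the destination row given by `idx`. This is a pure-Python sketch,
# not the real ggml tensor operation.
def set_rows(a, b, idx):
    assert len(b) == len(idx), "one destination index per source row"
    for src_row, dst in zip(b, idx):
        a[dst] = list(src_row)  # overwrite the destination row of `a`
    return a

# 4-row destination filled with zeros; two source rows scattered to rows 2 and 0
dst = [[0.0, 0.0] for _ in range(4)]
src = [[1.0, 2.0], [3.0, 4.0]]
result = set_rows(dst, src, [2, 0])
```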
commit f667f1e624
Author: Sigbjørn Skjæret
Date: 2025-06-27 10:42:19 +02:00

convert : fix broken sentencepiece vocab (#14416)

commit 8846aace49
Author: Xuan-Son Nguyen
Date: 2025-06-26 20:34:02 +03:00

model : gemma3n text-only (#14400)

* gemma3n
* add llm_graph_input_one

commit a01047b041
Author: bandoti
Date: 2025-06-26 13:46:53 -03:00

cmake: regen vulkan shaders when shaders-gen sources change (#14398)

* Add shaders-gen sources as target deps

commit b25346221d
Author: Sigbjørn Skjæret
Date: 2025-06-26 15:01:14 +02:00

llama : return mistral-v7-tekken as default template only (#14390)

commit e8215dbb96 (tag: b5760)
Author: Georgi Gerganov
Date: 2025-06-26 15:51:19 +03:00

metal : add special-case mat-vec mul for ne00 == 4 (#14385)

commit 5783ae4359 (tag: b5759)
Author: Georgi Gerganov
Date: 2025-06-26 15:50:15 +03:00

metal : batch rows copy in a single threadgroup (#14384)

* metal : batch rows copy in a single threadgroup
* metal : handle some edge cases when threadgroup size is not a power of 2

commit bf5bcd0b85
Author: Aaron Teo
Date: 2025-06-26 12:41:41 +02:00

docs: update s390x documentation + add faq (#14389)

* docs: update s390x documentation + add faq
* docs: add s390x z17 build q&a

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

commit 716301d1b0 (tag: b5757)
Author: R0CKSTAR
Date: 2025-06-26 12:11:59 +08:00

musa: enable fp16 mma (all) and cublas on qy2 (#13842)

* musa: enable fp16 mma (all) and cublas on qy2
* Update ggml/src/ggml-cuda/ggml-cuda.cu
* Address review comments
* musa: disable MUL_MAT_ID (q2_k × f32) due to precision issues

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

commit 60ef23d6c1
Author: Aaron Teo

ggml-cpu: enable IBM NNPA Vector Intrinsics (#14317)

						* ggml-cpu: add nnpa compile flag
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
(cherry picked from commit 4a9f60c201 )
* ggml-cpu: add fp16->fp32 nnpa first
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
(cherry picked from commit 8d4a7987f9 )
* ggml-cpu: add fp32->fp16
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
(cherry picked from commit 0ff0d65162 )
* ggml-cpu: better variable names
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
(cherry picked from commit 2f58bbcbb8 )
* docs: update s390x docs
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
(cherry picked from commit 01b929491b )
* ggml-cpu: add debugging prints to see if dlf16 is correct
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: fix print vs printf
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: fix float placeholder
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: ensure fp16 and fp32 load and stores are called
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: fp16 load ensured to hit
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: remove sigint from fp16 store
for some reason, the function is not getting a hit when debugged with
    gdb. we will need to investigate further
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: activate nnpa for ggml_cpu_fp16_to_fp32
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: nnpa activate ggml_cpu_fp16_to_fp32 for 8 elements
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: nnpa switch to vec_xst test
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: switch to vec_xst for 4 element loops also
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: rework noop
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: remove noop, general code cleanup
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: clarify variable naming
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: activate nnpa for ggml_cpu_fp32_to_fp16
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: add breakpoint for debugging
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: test fix for conversion failure
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: disable fp32->fp16 nnpa conversions for now
there are some conversion failures in nnpa that requires the eyes of an
ibm stsm. will create a separate pr to introduce the fp32->fp16 change.
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: switch to elif macro
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: reattempt fp32->fp16
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: fix typo
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: reattempt fp32->fp16
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: fix compiler types
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: change to typedef vector types
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: add 4 element loops for fp32->fp16
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: clarified vector naming
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: bring back fp32->fp16 store nnpa
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: activate nnpa fp32->fp16 or fp16->fp32 compute
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: add nnpa macro check in ggml-impl
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: add missing __func__
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: diagnose why __NNPA__ macro is not being defined
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: import vecintrin.h to fix compiler errors
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: update macro tests
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: move s390x typedef to own header file
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* Revert "ggml-cpu: move s390x typedef to own header file"
This reverts commit 157f856c34 .
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: switch to importing ggml-cpu-impl instead
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: fix macro declaration
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: test more macros
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: add debug prints
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: bruteforce macro definitions
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: move macro definitions
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: add ggml-impl.h to cmakelists
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: switch to private macros
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: move s390x typedef to own header file
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
(cherry picked from commit 157f856c34 )
* ggml-cpu: move things around
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: bring back compile macros
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: switch to quotes for import
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: add compiler error macro
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: add s390x detection in ggml-src
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: bring back compile definitions
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: undo cmakelists work
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* Revert "ggml-cpu: move s390x typedef to own header file"
This reverts commit 18d79e1a30 .
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: remove typedefs.h
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: remove typedef from cmakelists
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: add ggml-impl.h future notes
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: add todo comment for future reference
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: clarify naming of dlf16
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: remove unnecessary target compile definitions
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: move nnpa fp16->fp32 and fp32->fp16 to simd-mappings
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml: refactor fp32->fp16 and fp16->fp32 simd to ggml-cpu
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* docs: update broken huggingface link for s390x
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: fix duplicate func names during compile
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* Revert "ggml-cpu: fix duplicate func names during compile"
This reverts commit fbb733451f .
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* Revert "ggml: refactor fp32->fp16 and fp16->fp32 simd to ggml-cpu"
This reverts commit bd288e8fa5 .
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml: refactor fp16<->fp32 simd to ggml-cpu
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: fix missing simd-mappings.h import in quants.c
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: fix missing simd-mappings.h within repack
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: fix amx mmq missing simd-mappings.h
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: attempt at fixing loongarch failing build
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: move nnpa together with other fp16<->fp32 simd
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: fix wrong refactor of ggml-base
ref: https://github.com/ggml-org/llama.cpp/pull/14317#discussion_r2164176555 
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml: remove dependency on ggml-cpu from ggml-base
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: rename all fp16<->fp32 macros to prefix with ggml_cpu
ref: https://github.com/ggml-org/llama.cpp/pull/14317#discussion_r2164449406 
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: remove mistaken fallback macro
fallback logic was already implemented but i was too sleepy to realise
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml: move ggml_table_f32_f16 to ggml-cpu
ref: https://github.com/ggml-org/llama.cpp/pull/14317#discussion_r2164775006 
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: move ggml_table_f32_f16 back to ggml-base due to ci failures
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* Revert "ggml-cpu: move ggml_table_f32_f16 back to ggml-base due to ci failures"
This reverts commit 32a3533564 .
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* Revert "ggml: move ggml_table_f32_f16 to ggml-cpu"
This reverts commit 9e40d984ad .
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml: move ggml_table_f32_f16 to ggml-cpu
ref: https://github.com/ggml-org/llama.cpp/pull/14317#discussion_r2164775006 
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
(cherry picked from commit 9e40d984ad )
* ggml: move ggml_table_f32_f16 to ggml-cpu.c
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: extern c ggml_table_f32_f16 + chore docs
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: dedup ggml_table_f32_f16 from simd-mappings.h
we rely on the variable declaration in ggml-cpu.c instead
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* Revert "ggml-cpu: dedup ggml_table_f32_f16 from simd-mappings.h"
This reverts commit f71b21d2f7 .
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* ggml-cpu: bring back ggml_table_f32_f16
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* Revert "ggml-cpu: bring back ggml_table_f32_f16"
This reverts commit 2dce119178 .
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
* fix ggml time initialization
* fix f32_f16 table init
* remove extra line
---------
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com >
Co-authored-by: slaren <slarengh@gmail.com > 
						
						
							
Tag: b5756
Date: 2025-06-25 23:49:04 +02:00

commit b193d53069 (tag: b5755)
Author: Sigbjørn Skjæret
Date: 2025-06-25 23:26:51 +02:00

ggml : do not output unprintable characters on GGUF load failure (#14381)

commit 2bf9d539dd (tag: b5754)
Author: Anton Mitkov
Date: 2025-06-25 18:09:55 +02:00

sycl: GGML_SYCL_DISABLE_OPT on by default for all Intel Devices (#13973)

commit 73e53dc834 (tag: b5753)
Author: lhez
Date: 2025-06-24 11:46:25 -07:00

opencl: ref count ggml_backend_opencl_context and refactor profiling (#14254)

* Move profiling info into `ggml_backend_opencl_context`
* Add `enqueue_ndrange_kernel` to launch kernel

commit 62af464227 (tag: b5752)
Author: Georgi Gerganov
Date: 2025-06-24 18:26:30 +03:00

batch : fix check for empty sequences in memory (#14364)

* batch : fix check for empty sequences in memory
* cont : reuse the var

commit c148cf1946 (tag: b5751)
Author: Mathieu Baudier
Date: 2025-06-24 15:05:31 +02:00

cmake : use LLAMA_BUILD_NUMBER when defining LLAMA_INSTALL_VERSION (#14362)

commit 1b809cee22
Author: Nigel Bosch
Date: 2025-06-24 10:59:11 +02:00

server : move no API key doc to /health (#14352)

commit abf241045d (tag: b5749)
Author: Sigbjørn Skjæret
Date: 2025-06-24 09:31:00 +02:00

main : honor --verbose-prompt on interactive prompts (#14350)

commit 901e20bbe5
Author: Bartowski
Date: 2025-06-24 09:17:58 +03:00

jinja : Add Mistral-Small-3.2-24B-Instruct-2506.jinja (#14349)

This will allow the use of tools on the llama-server

commit 0142961a2e (tag: b5747)
Author: uvos
Date: 2025-06-24 01:12:56 +02:00

CUDA/HIP: optimize mmv paths taken for HIP devices (#14324)

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

commit ce82bd0117
Author: bandoti
Date: 2025-06-23 15:30:51 -03:00

ci: add workflow for relocatable cmake package (#14346)

commit bf2a99e3cb (tag: b5745)
Author: Jeff Bolz
Date: 2025-06-23 15:44:48 +02:00

vulkan: update windows SDK in release.yml (#14344)

commit 72c6bc3f3d (tag: b5744)
Author: Molly Sophia
Date: 2025-06-23 19:56:19 +08:00

llama : better rwkv chat template and add missing inputs.use_jinja setting (#14336)

* llama-cli : add missing `inputs.use_jinja` setting
* llama : better legacy chat template for rwkv

Signed-off-by: Molly Sophia <mollysophia379@gmail.com>

commit defe2158dd (tag: b5743)
Author: Johannes Gäßler
Date: 2025-06-23 13:11:31 +02:00

CUDA: mul_mat_v support for batch sizes > 1 (#14262)

* CUDA: mul_mat_v support for batch sizes > 1
* use 64 bit math for initial offset calculation

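The second bullet in the entry above ("use 64 bit math for initial offset calculation") guards against 32-bit overflow when computing linear element offsets for large tensors. A hedged illustration with hypothetical sizes (the numbers below are assumptions for the example, not taken from the commit):

```python
# With a large enough matrix, the linear element offset row * ne00 exceeds
# the signed 32-bit range, so a kernel computing it in int32 would wrap
# around; hence the switch to 64-bit math for the initial offset.
INT32_MAX = 2**31 - 1

ne00 = 4096        # elements per row (hypothetical)
row = 1_000_000    # row index being processed (hypothetical)
offset = row * ne00
overflows_int32 = offset > INT32_MAX
```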
commit 7b50d589a8 (tag: b5742)
Author: Georgi Gerganov
Date: 2025-06-23 12:27:35 +03:00

kv-cells : fix tracking of seq_pos (#14339)

* kv-cells : fix tracking of seq_pos during cache reuse
* cont : improve error message
* cont : add more comments

commit 3a9457df96
Author: Jeff Bolz
Date: 2025-06-23 10:19:24 +02:00

vulkan: update windows SDK in CI (#14334)

commit fa4a9f2a1c (tag: b5740)
Author: Ed Addario
Date: 2025-06-22 23:16:26 +02:00

quantize : handle user-defined pruning of whole layers (blocks) (#13037)

commit 238005c2dc
Author: Sigbjørn Skjæret
Date: 2025-06-22 19:46:17 +02:00

gguf-py : fix SpecialVocab parsing when post_processor is null (#14330)

commit 66aba7aca9 (tag: b5738)
Author: Ruikai Peng
Date: 2025-06-23 01:28:06 +08:00

run : avoid double tokenization (#14327)

* run : avoid double tokenization by adopting common_tokenize heuristic
* build : fix windows gcc and clang warnings
* lint : fix trailing whitespace
* run : fix is_first flag

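The "is_first" flag mentioned above controls when the BOS token is prepended: only the first piece of input gets it, and each piece is tokenized exactly once. A heavily hedged sketch of that idea (the toy word-length "tokenizer" and BOS id are stand-ins, not llama.cpp's real common_tokenize):

```python
# Hypothetical sketch of the is_first heuristic: BOS is added only when
# tokenizing the very first piece; later pieces are tokenized without it.
def tokenize_pieces(pieces, bos_id=1):
    tokens = []
    is_first = True
    for text in pieces:
        if is_first:
            tokens.append(bos_id)  # BOS only on the first piece
            is_first = False
        # stand-in tokenizer: one deterministic toy token per word
        tokens.extend(len(word) + 2 for word in text.split())
    return tokens

out = tokenize_pieces(["hi there", "again"])
```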
commit f1f5e82df6 (tag: b5737)
Author: Georgi Gerganov
Date: 2025-06-22 20:10:07 +03:00

examples : fix is_first logic for tokenization (#14329)

commit af3373f1ad (tag: b5736)
Author: uvos
Date: 2025-06-22 16:51:23 +02:00

HIP: enable vec fattn on RDNA4 (#14323)

commit 5d5c066de8 (tag: b5735)
Author: yuiseki
Date: 2025-06-22 14:44:57 +02:00

mtmd : fix Pixtral OOM with large images by capping image_size to 1024 (#14326)

Mistral Small 2506 models using Pixtral vision encoder were running out of
GPU memory when processing images larger than 1024x1024 pixels due to
exponential memory growth from unlimited image size. This fix applies the
same 1024x1024 limit used by Qwen2VL models to prevent OOM issues while
maintaining compatibility with existing models.

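The fix above amounts to capping the longest image side at 1024 so encoder memory stays bounded. A minimal sketch of that clamp, preserving aspect ratio (the function name and rounding choice are assumptions, not the actual mtmd code):

```python
# Sketch of the image-size cap described above: scale the image down so its
# longest side is at most max_side, keeping the aspect ratio.
def clamp_image_size(width, height, max_side=1024):
    longest = max(width, height)
    if longest <= max_side:
        return width, height  # already within the limit, leave untouched
    scale = max_side / longest
    return max(1, round(width * scale)), max(1, round(height * scale))

big = clamp_image_size(2048, 1024)    # scaled down by 2x
small = clamp_image_size(800, 600)    # unchanged
```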
commit 40bfa04c95 (tag: b5734)
Author: Sigbjørn Skjæret
Date: 2025-06-22 08:37:43 +03:00

common : use std::string_view now that we target c++17 (#14319)

commit aa064b2eb7 (tag: b5733)
Author: Aman Gupta
Date: 2025-06-22 12:39:54 +08:00

CUDA: add mean operation (#14313)

* CUDA: add mean operation
* add back sum_rows_f32_cuda
* Review: early exit if col!=0

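As the bullets above suggest, the mean operation is built on the row-sum primitive: a per-row mean is sum_rows divided by the row length. A pure-Python model of that relationship (illustrative only, not the CUDA kernel):

```python
# Model of the mean op in terms of a row-sum primitive, mirroring the
# "add back sum_rows_f32_cuda" relationship noted in the log.
def sum_rows(rows):
    return [sum(r) for r in rows]

def mean_rows(rows):
    return [s / len(r) for s, r in zip(sum_rows(rows), rows)]

means = mean_rows([[1.0, 2.0, 3.0], [4.0, 4.0]])
```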
commit aa0ef5c578
Author: Sigbjørn Skjæret
Date: 2025-06-21 18:12:05 +02:00

gguf-py : fix Qwen3-Embedding eos token (#14314)

commit bb16041cae (tag: b5731)
Author: Markus Tavenrath
Date: 2025-06-21 08:17:12 +02:00

Add support for VK_EXT_debug_utils to add labels to Vulkan objects (#13792)

* Add support for VK_EXT_debug_utils to add labels to Vulkan objects. In step 1 compute pipelines are getting labeled.
* remove #ifdef for debug utils and add queue marker.

Sigbjørn Skjæret
58cba76a9a
gguf-py : fix TemplateProcessing pair when bos/eos is missing (#14312)
2025-06-21 07:33:21 +02:00

Georgi Gerganov
67ae5312e2
metal : fix thread-safety (#14300)
ggml-ci
b5729
2025-06-21 08:04:18 +03:00

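The thread-safety fix itself is not shown in this log; as a generic illustration of the pattern such fixes apply (not the actual Metal backend code), shared mutable state touched from multiple threads is serialized behind a mutex:

```cpp
#include <cassert>
#include <mutex>
#include <thread>

// Hypothetical shared state: without the lock, concurrent add() calls can
// lose updates; with std::lock_guard, each read-modify-write is atomic
// with respect to the other threads.
struct counter {
    std::mutex m;
    long value = 0;
    void add(long n) {
        std::lock_guard<std::mutex> lock(m);  // released at scope exit
        value += n;
    }
};
```

With the guard in place, two threads incrementing concurrently still produce an exact total.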
Georgi Gerganov
692e3cdd0a
memory : rename interface to llama_memory_context_i (#14296)
* memory : rename interface to llama_memory_context_i
ggml-ci
* cont : fix comments
* cont : use "mctx" for referencing a memory context
ggml-ci
b5728
2025-06-21 08:03:46 +03:00

Daniel Han
b23fa0b3f4
convert : fix Llama 4 conversion (#14311)
2025-06-21 06:32:01 +02:00

Georgi Gerganov
06cbedfca1
sync : ggml
ggml-ci
b5726
2025-06-20 21:02:47 +03:00

Acly
b7147673f2
Add ggml_roll (ggml/1274)
* ggml : add ggml_roll
* use set/get_op_params & std::min
2025-06-20 21:02:47 +03:00

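The exact `ggml_roll` signature is not shown in this log; assuming it follows the usual roll semantics (a circular shift, analogous to `numpy.roll`), a minimal 1-D CPU sketch of the operation:

```cpp
#include <cassert>
#include <vector>

// Circular shift by k positions: each element moves k slots forward and
// elements that fall off the end wrap around to the front. Negative k
// shifts backward. This is an illustration of roll semantics, not the
// ggml implementation.
inline std::vector<int> roll(const std::vector<int> & v, int k) {
    const int n = static_cast<int>(v.size());
    std::vector<int> out(n);
    for (int i = 0; i < n; ++i) {
        out[((i + k) % n + n) % n] = v[i];  // double-mod handles negative k
    }
    return out;
}
```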
David Chiu
d860dd99a4
docs : fix the link to llama.h (#14293)
2025-06-20 19:43:35 +02:00

Aman Gupta
c959f462a0
CUDA: add conv_2d_transpose (#14287)
* CUDA: add conv_2d_transpose
* remove direct include of cuda_fp16
* Review: add brackets for readability, remove ggml_set_param and add asserts
b5723
2025-06-20 22:48:24 +08:00

Sigbjørn Skjæret
22015b2092
lint : remove trailing whitepace (#14304)
b5722
2025-06-20 16:37:44 +02:00

Ruikai Peng
dd6e6d0b6a
vocab : prevent tokenizer overflow (#14301)
* vocab : prevent stack overflow in tokenize
* vocab : return error instead of aborting on oversized token count
* vocab : INT32_MIN from llama_tokenize on overflow
b5721
2025-06-20 07:13:06 -07:00

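The commit body above says oversized token counts now yield an error value (`INT32_MIN`) from `llama_tokenize` instead of aborting; a minimal sketch of that error-signaling pattern (illustrative only, with a hypothetical helper name, not the actual llama.cpp code):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Hypothetical helper: an API reporting token counts through an int32_t
// cannot represent counts above INT32_MAX, so instead of aborting it
// returns the INT32_MIN sentinel to signal overflow to the caller.
inline int32_t checked_token_count(size_t n_tokens) {
    if (n_tokens > static_cast<size_t>(INT32_MAX)) {
        return INT32_MIN;  // overflow: report an error code, do not abort
    }
    return static_cast<int32_t>(n_tokens);
}
```

Returning a sentinel keeps the failure recoverable: library callers can check for it and fall back, rather than having the whole process terminated.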
Nicolò Scipione
8308f98c7f
sycl: add usage of enqueue_functions extension (#14244)
* Add header and namespace to use enqueue_functions extension
* Convert submit and parallel_for to use new extension in convert.cpp
* Convert submit and parallel_for to use extension in ggml-sycl.cpp
* Convert submit and parallel_for to use extension in gla.cpp
* Convert submit and parallel_for in mmq.cpp
* Convert submit and parallel_for in mmvq.cpp
* Convert submit and parallel_for in remaining files
* Convert all simple parallel_for to nd_launch from enqueue_functions extension
* Wrapping extension in general function
Create a general function that enables the enqueue_functions extension if it is enabled in the compiler, otherwise calls the general SYCL function to launch kernels.
---------
Signed-off-by: nscipione <nicolo.scipione@codeplay.com>
b5720
2025-06-20 15:07:21 +02:00