commit d7cfe1ffe0
Author: Olivier Chafik
Date:   2025-02-25 18:52:56 +00:00

    docs: add docs/function-calling.md to lighten server/README.md's plight (#12069)

commit a82c9e7c23
Author: Jeff Bolz
Date:   2025-02-25 16:30:21 +01:00

    vulkan: fix assertion when qy_needs_dequant (#12068)

    Looks like a copy/paste bug from qx_needs_dequant.
						 
				 
			
				
					
						
							
							
commit 401af80b54
Author: rhjdvsgsgks
Date:   2025-02-25 12:52:52 +01:00

    server: handle echo=false on /v1/completions (#12060)
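The `echo` semantics being fixed here can be sketched with a hypothetical helper (`build_completion_text` is illustrative, not the server's actual code): in the OpenAI-style completions API, the prompt is included in the returned `text` only when `echo` is true.

```python
def build_completion_text(prompt: str, generated: str, echo: bool = False) -> str:
    """Return the `text` field of an OpenAI-style /v1/completions response.

    With echo=true the prompt is prepended to the generation; with
    echo=false (the default) only the generated tokens are returned.
    """
    return prompt + generated if echo else generated
```

A server honoring `echo=false` must therefore strip the prompt before assembling the response, which is the behavior this commit restores.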
						 
				 
			
				
					
						
							
							
commit c132239bfb
Author: Judd
Date:   2025-02-25 12:32:20 +01:00

    add OP sigmoid (#12056)

    Co-authored-by: Judd <foldl@boxvest.com>
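The sigmoid op added here computes the logistic function σ(x) = 1 / (1 + e⁻ˣ). A reference implementation in Python (purely illustrative; the numerically stable two-branch form avoids overflow for large |x|):

```python
import math

def sigmoid(x: float) -> float:
    # Numerically stable logistic function: for x < 0, rewrite as
    # e^x / (1 + e^x) so math.exp never receives a large positive argument.
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    e = math.exp(x)
    return e / (1.0 + e)
```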
						 
				 
			
				
					
						
							
							
commit 393fca629e
Author: Molly Sophia
Date:   2025-02-25 19:28:22 +08:00

    ggml-cpu: Fix build with sve (#12059)

    * ggml-cpu: Fix build with sve
    * ggml-cpu: Remove unused variable in sve q3_k vec dot

    Signed-off-by: Molly Sophia <mollysophia379@gmail.com>
						 
				 
			
				
					
						
							
							
commit 61d4f39dfe
Author: Rémy O
Date:   2025-02-25 12:04:45 +01:00

    vulkan: implement more backpropagation operators (#11914)

    * vulkan: implement GGML_OP_ROPE_BACK
    * vulkan: implement GGML_OP_RMS_NORM_BACK
    * vulkan: implement GGML_OP_SILU_BACK
    * vulkan: implement GGML_OP_SOFTMAX_BACK
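The softmax backward op listed above has a compact closed form: given y = softmax(x) and an upstream gradient dy, the input gradient is dx_i = y_i (dy_i - Σ_j dy_j y_j). A minimal pure-Python sketch of that math (illustrative only, not the Vulkan shader):

```python
import math

def softmax(xs):
    m = max(xs)                                  # subtract max for stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def softmax_back(y, dy):
    """Backward pass of softmax: given y = softmax(x) and dL/dy,
    return dL/dx_i = y_i * (dy_i - sum_j dy_j * y_j)."""
    dot = sum(d * v for d, v in zip(dy, y))
    return [v * (d - dot) for v, d in zip(y, dy)]
```

Backends typically verify such kernels against finite differences, since the Jacobian of softmax couples every output to every input.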
						 
				 
			
				
					
						
							
							
commit 0b52745649
Author: Olivier Chafik
Date:   2025-02-25 10:40:22 +00:00

    server: support add_generation_prompt query param (#12062)
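What the flag controls: whether the chat template appends the assistant-turn header after the rendered messages, so the model continues as the assistant. A toy, non-Jinja sketch (the `<|...|>` markers and `render_chat` helper are made up for illustration; real templates are model-specific Jinja):

```python
def render_chat(messages, add_generation_prompt=True):
    """Toy stand-in for a chat template. When add_generation_prompt is
    true, an empty assistant header is appended so the model generates
    the assistant's reply; when false, the prompt ends after the last
    message (useful e.g. for continuing a partial assistant turn)."""
    out = "".join(f"<|{m['role']}|>{m['content']}<|end|>\n" for m in messages)
    if add_generation_prompt:
        out += "<|assistant|>"
    return out
```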
						 
				 
			
				
					
						
							
							
commit 4d1051a40f
Author: Alex Brooks
Date:   2025-02-25 10:46:05 +01:00

    Add Doc for Converting Granite Vision -> GGUF (#12006)

    * Add example docs for granite vision

    Signed-off-by: Alex-Brooks <Alex.Brooks@ibm.com>
						 
				 
			
				
					
						
							
							
commit 3e9a2860e9
Author: Vitali Lovich
Date:   2025-02-25 11:29:33 +02:00

    llama : expose llama_model_n_head_kv in the API (#11997)

    It's useful to be able to have this from the library layer as it's a key
    parameter of the model (e.g. to figure out how much KV cache memory is
    needed).
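As the message notes, the number of KV heads drives KV-cache sizing (for GQA models it is smaller than the attention head count, since KV heads are shared). A back-of-envelope estimate, sketched in Python with entirely hypothetical model numbers, assuming an f16 cache and ignoring allocator padding and other overheads:

```python
def kv_cache_bytes(n_ctx, n_layer, n_head_kv, head_dim, bytes_per_elem=2):
    """Rough KV cache size: one K and one V vector per token, per layer,
    over the KV heads. bytes_per_elem=2 assumes f16 cache entries."""
    return 2 * n_ctx * n_layer * n_head_kv * head_dim * bytes_per_elem

# Hypothetical 8B-class GQA model: 32 layers, 8 KV heads, head dim 128.
size = kv_cache_bytes(n_ctx=4096, n_layer=32, n_head_kv=8, head_dim=128)
```

With these numbers the estimate comes to 512 MiB, which is why exposing `n_head_kv` from the library layer matters for callers budgeting memory up front.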
						 
				 
			
				
					
						
							
							
commit 58d07a8043
Author: Gian-Carlo Pascutto
Date:   2025-02-25 11:27:58 +02:00

    metal : copy kernels for quant to F32/F16 conversions (#12017)

    metal: use dequantize_q templates

    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

commit 34a846b584
Author: lhez
Date:   2025-02-24 14:47:07 -07:00

    opencl: fix for small models (#11950)

    * opencl: fix small shape gemv, remove unused extensions
    * opencl: fix `transpose_16`, `dump_tensor`, enforce subgroup size
    * opencl: fix for token length < 4
    * opencl: use wave size of 64 for all Adreno GPUs

    Co-authored-by: Shawn Gu <quic_shawngu@quicinc.com>
    Co-authored-by: Skyler Szot <quic_sszot@quicinc.com>

commit 7a2c913e66
Author: Alex Brooks
Date:   2025-02-24 17:09:51 +01:00

    llava : Add Granite Vision Support (#11794)

    * Add super wip scripts for multimodal granite gguf
    * Add example for converting mmgranite to gguf
    * remove hardcoded path
    * Add vision feature layer to gguf params
    * Clean up llava surgery and remove name substitution hacks
    * Add transformers llava next tensor name mapping
    * Make siglip / openclip mutually exclusive
    * Fix projector linear substitution
    * Fix linear 2 substitution index
    * Increase max flattened gridpoints to 64
    * Fix hardcoded concat for multiple feature layers
    * Pull vision feature layers out of gguf keys
    * fix num gridpoints and use all layers
    * Avoid dropping last image encoder layer in llava models
    * Use 10 for max number of patches
    * Standardize vision feature layers
    * Cleanup logs
    * Update comment for vision feature layer init
    * Update notes for alternative to legacy llm conversion script
    * Fix notes rendering
    * Add v prefix to vision feature layer log
    * Use current defaults for feature layer
    * Use constant for max gridpoints / feat layers, style fixes
    * clarify non-negative feature layers
    * Remove CLIP_API from func signature
    * Use MAX_IMAGE_FEATURE_LAYERS const in layer calc
    * Clarify feature layers are non-negative ints and not uint
    * Fix condition for reading feature layers
    * pop last llava layer when feature layers are unset
    * Fix unset vision layer 0
    * Update examples/llava/clip.cpp
    * Re-enable assertion for out-of-bounds get_rows
    * Use std vector for gridpoints and feature layers
    * Calculate max feature layer at load time
    * Include base patch for granite vision allocation
    * Fix trailing whitespace
    * Add max num patches = 10 back for minicpmv
    * Use unordered set to store feature layers
    * Use max feature layer for postnorm
    * Apply suggestions from code review

    Signed-off-by: Alex-Brooks <Alex.Brooks@ibm.com>
    Co-authored-by: Xuan-Son Nguyen <thichthat@gmail.com>

commit 08d5986290
Author: Neo Zhang Jianyu
Date:   2025-02-24 22:33:23 +08:00

    [SYCL] Optimize mul_mat for Q4_0 on Intel GPU (#12035)

    * opt performance by reorder for Intel GPU
    * detect hw type and save opt feature, and print opt feature
    * correct name
    * support optimizing the graph once when computing the graph, record the opt status in tensor->extra, make CI pass
    * add env variable GGML_SYCL_DISABLE_OPT for debug
    * use syclex::architecture to replace the custom hw define, update the guide for GGML_SYCL_DISABLE_OPT
    * add performance data
    * mv getrows functions to separated files
    * fix global variables

    Co-authored-by: arthw <14088817+arthw@users.noreply.github.com>
						 
				 
			
				
					
						
							
							
commit 651adf4b66
Author: Aleksei Nikiforov
Date:   2025-02-24 11:27:01 +00:00

    gguf_convert_endian.py: implement byteswapping for q4_k and q6_k (#11349)
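The core idea behind byteswapping quantized blocks: only the multi-byte fields (such as f16 block scales) are endian-sensitive; the packed single-byte quant values need no change. A minimal sketch with a toy block layout (the layout here is invented for illustration; the real q4_k/q6_k block structs are larger and more involved):

```python
import struct

def byteswap_u16(raw: bytes) -> bytes:
    """Swap the byte order of a single 16-bit field, e.g. an f16 block
    scale, when converting between little- and big-endian files."""
    return raw[::-1]

# Toy block: one 16-bit f16 scale followed by 4 single-byte quant values.
block = struct.pack("<H4B", 0x3C00, 1, 2, 3, 4)   # 0x3C00 is f16 1.0
# Swap only the scale; the uint8 quant payload is left untouched.
swapped = byteswap_u16(block[:2]) + block[2:]
```

The actual script walks every block of each tensor and applies this field-by-field treatment per quantization type, which is why each new type (q4_k, q6_k) needs explicit support.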
						 
				 
			
				
					
						
							
							
commit 8303e8b0fb
Author: Akarshan Biswas
Date:   2025-02-24 10:18:25 +00:00

    SYCL: Fix GGML_SYCL_DEBUG macro (#11995)
						 
				 
			
				
					
						
							
							
commit 7ad0779f5d
Author: Florent BENOIT
Date:   2025-02-23 17:15:51 +00:00

    run: allow to customize prompt by env var LLAMA_PROMPT_PREFIX (#12041)

    Signed-off-by: Florent Benoit <fbenoit@redhat.com>

commit f777a73e18
Author: Eric Curtin
Date:   2025-02-23 13:14:32 +00:00

    Some llama-run cleanups (#11973)

    Use the consolidated open function call from the File class. Change
    read_all to to_string(). Remove exclusive locking: the intent of that
    lock is to avoid multiple processes writing to the same file, which is
    not an issue for readers, although we may want to consider adding a
    shared lock. Remove passing nullptr as a reference, since references
    are never supposed to be null. clang-format the code for consistent
    styling.

    Signed-off-by: Eric Curtin <ecurtin@redhat.com>
						 
				 
			
				
					
						
							
							
commit af7747c95a
Author: Aaron Teo
Date:   2025-02-22 21:39:24 +00:00

    ggml-cpu: Support s390x SIMD Instruction Set (#12019)

    * ggml: add s390x ARCH_FLAGS for compilation
    * ggml: add SIMD for s390x using vector intrinsics

      SIMD is activated for:
      * ggml_vec_dot_f32
      * ggml_vec_dot_f16
      * ggml_vec_mad_f32
      * ggml_vec_mad_f16
      * ggml_vec_mad_f32_unroll
      * ggml_vec_scale_f32
      * ggml_vec_scale_f16

      SIMD is NOT activated for:
      * ggml_vec_dot_f16_unroll (pending bugfix)

    * ggml: fix missing escape character in GGML_F32x4_REDUCE
    * ggml: add temporary patch for GGML_F32_ARR and GGML_F16_ARR
    * ggml: fix s390x GGML_F32x4_REDUCE
    * ggml: full SIMD activation for F32, F16 s390x
    * ggml: add option to disable s390x VXE/VXE2
    * ggml: change vecintrin.h include to ggml-cpu-impl, add __VXE__ and __VXE2__ macros
    * cmake: add s390x target detection for VX/VXE/VXE2
    * ggml: move s390x vector intrinsics to ggml-cpu-impl.h
    * ggml: s390x Q8_0 SIMD
    * ggml: correct documentation for Q8_0
    * ggml: s390x reduce code complexity Q8_0
    * ggml: s390x fix typo in Q8_0
    * ggml: s390x SIMD activated for Q4_1
    * ggml: s390x inline vec_reve
    * ggml: s390x SIMD activation for Q4_0
    * ggml: add VXE backend feature
    * ggml: remove test.py
    * ggml: s390x SIMD activation for quantize_row_q8_0
    * ggml: s390x SIMD activation for quantize_row_q8_1
    * ggml: s390x SIMD activation for iq4_xs
    * ggml: bugfix iq4_xs
    * ggml: s390x SIMD activation for iq4_nl
    * ggml: add float, double, and long vector data types
    * ggml: clean up iq4_xs SIMD
    * ggml: fix improper use of restrict keyword
    * ggml: update warning message for ggml_vec_tbl
    * ggml: untested implementation of ggml_vec_dot_iq2_xxs_q8_K
    * ggml: update ggml_vec_dot_q4_1_q8_1 to use typedefs
    * ggml: switch to restrict for iq4_nl
    * ggml: slight dot product speed improvement for q4_1_q8_1
    * ggml: s390x SIMD activation for q6_K
    * ggml: add missing `_t` to ggml_int8x16x4_t
    * ggml: fix missing `_t` for ggml_vec_xl_s8x4
    * ggml: fix more missing `_t`
    * ggml: add unroll and prefetch to Q8_0 (3.86% faster prompt processing, 32.22% faster token generation)
    * ggml: patch Q8_0 to use proper vector sizes
    * ggml: optimise Q8_0 dot product compute kernel further
    * ggml: add unroll and prefetch to Q4_1
    * ggml: refactor Q6_K variable naming for readability
    * ggml: fix Q6_K typos
    * ggml: s390x SIMD activation for Q5_K
    * ggml: fix wrong char*x16_t naming
    * ggml: fix Q5_K y0 wrong signedness
    * ggml: fix Q5_K invalid uchar type
    * ggml: s390x SIMD activation for Q4_K
    * ggml: fix Q4_K invalid vector intrinsics
    * ggml: simplify ggml_padd_s16 compute kernel
    * ggml: correct ggml-cpu vxe wording
    * ggml: change ggml_aligned_malloc alignment to 256 (the cache line size on s390x)
    * ggml: resolve pr merge via cherry-pick 225bbbf
    * ggml : fix LoongArch compile error with 128-bit SIMD (#11701)
    * ggml: resolve pr merge via cherry-pick 4571953
    * cmake: remove fork when determining s390x machine type (thank you @ericcurtin)

    Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
    Co-authored-by: Jinyang He <hejinyang@loongson.cn>
    Co-authored-by: junchao-zhao <68935141+junchao-loongson@users.noreply.github.com>
						 
				 
			
				
					
						
							
							
commit a28e0d5eb1
Author: Johannes Gäßler
Date:   2025-02-22 20:44:34 +01:00

    CUDA: add option to compile without FlashAttention (#12025)

commit 36c258ee92
Author: Ting Lou
Date:   2025-02-22 15:28:28 +01:00

    llava: build clip image from pixels (#11999)

    * llava: export function `clip_build_img_from_pixels` to build an image
      from pixels decoded by other libraries instead of stb_image.h, for
      better performance
    * Apply suggestions from code review

    Co-authored-by: Xuan-Son Nguyen <thichthat@gmail.com>

commit f3e64859ed
Author: Georgi Gerganov
Date:   2025-02-22 15:03:00 +02:00

    ci : fix arm upload artifacts (#12024)

    * ci : fix arm upload artifacts
    * cont : fix archive name to use matrix

commit 5fa07c2f93
Author: Johannes Gäßler
Date:   2025-02-22 12:20:17 +01:00

    CUDA: optimize FA for GQA + large batches (#12014)

commit 335eb04a91
Author: Rohanjames1997
Date:   2025-02-22 11:48:57 +01:00

    ci : Build on GitHub-hosted arm64 runners (#12009)
						 
				 
			
				
					
						
							
							
commit cf756d6e0a
Author: Georgi Gerganov
Date:   2025-02-22 11:46:31 +01:00

    server : disable Nagle's algorithm (#12020)
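Disabling Nagle's algorithm means setting `TCP_NODELAY` on the connection socket, so small writes (such as streamed token chunks) are sent immediately instead of being coalesced while waiting for an ACK. A minimal illustration in Python (not the server's actual httplib-based code):

```python
import socket

def make_low_latency_socket() -> socket.socket:
    """Create a TCP socket with Nagle's algorithm disabled so small
    writes go out immediately rather than being batched."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    return s
```

The trade-off is more, smaller packets on the wire in exchange for lower per-token latency, which is the right trade for an interactive inference server.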
						 
				 
			
				
					
						
							
							
commit d70908421f
Author: Gian-Carlo Pascutto
Date:   2025-02-22 09:43:24 +01:00

    cuda: Add Q5_1, Q5_0, Q4_1 and Q4_0 to F32 conversion support (#12000)

commit de8b5a3624
Author: Daniel Bevenius
Date:   2025-02-22 06:33:29 +01:00

    llama.swiftui : add "Done" dismiss button to help view (#11998)

    The commit updates the help view in the llama.swiftui example to use a
    NavigationView and a Done button to dismiss the help view.

    The motivation for this is that without this change there is no way to
    dismiss the help view.

commit 51f311e057
Author: Georgi Gerganov
Date:   2025-02-21 18:33:18 +02:00

    llama : skip loading unused tensors (#12004)

    * llama : assign unknown/unused tensors to host buffer type
    * llama : skip unused tensors
						 
				 
			
				
					
						
							
							
commit 586d5fe6eb
Author: Johannes Gäßler
Date:   2025-02-21 12:51:25 +01:00

    doc: update contributing guidelines [no ci] (#11969)

commit ecc8e3aeff
Author: PureJourney
Date:   2025-02-21 12:21:05 +01:00

    CUDA: correct the lowest Maxwell supported by CUDA 12 (#11984)

    Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

commit 0b3863ff95
Author: Bodhi
Date:   2025-02-21 09:46:23 +02:00

    MUSA: support ARM64 and enable __dp4a etc. (#11843)

    * MUSA: support ARM64 and enable __dp4a etc.
    * fix cross entropy loss op for musa
    * update
    * add cc info log for musa
    * add comment for the MUSA .cc calculation block

    Co-authored-by: Bodhi Hu <huaishun.hu@mthreads.com>

commit ee02ad02c5
Author: Alex Brooks
Date:   2025-02-21 08:11:03 +02:00

    clip : fix visual encoders with no CLS (#11982)

    Signed-off-by: Alex-Brooks <Alex.Brooks@ibm.com>
						 
				 
			
				
					
						
							
							
commit c392e5094d
Author: momonga
Date:   2025-02-20 19:43:22 +01:00

    server (webui): Fix Premature Submission During IME Conversion (#11971)

    * fix: skip submission while IME composing
    * fix npm rebuild
    * fix warn

    Co-authored-by: momonga <115213907+mmnga@users.noreply.github.com>
    Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
						 
				 
			
				
					
						
							
							
commit c5d91a7400
Author: Charles Xu
Date:   2025-02-20 15:06:51 +02:00

    ggml-cpu: Add CPU backend support for KleidiAI library (#11390)

    * ggml-cpu: Add CPU backend support for KleidiAI library
    * Add environment variable GGML_KLEIDIAI_SME
    * Add support for multithreaded LHS conversion
    * Switch kernel selection order to dotprod and i8mm
    * Updates for review comments
    * More updates for review comments
    * Reorganize and rename KleidiAI files
    * Move ggml-cpu-traits.h to source file
    * Update cmake for SME build and add alignment for SME
    * Remove appending GGML_USE_CPU_KLEIDIAI to the GGML_CDEF_PUBLIC list

commit 4806498bf1
Author: Prashant Vithule
Date:   2025-02-20 12:08:32 +02:00

    ggml: aarch64: implement SVE kernels for q3_K_q8_K vector dot (#11917)

    * Added SVE implementation for the Q3_K kernel in ggml-cpu-quants.c
    * Improved formatting of code in ggml-cpu-quants.c
    * style : minor fixes
    * style : fewer whitespaces
    * style : ptr spacing

    Co-authored-by: vithulep <p.m.vithule1517@gmail.com>
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

commit 0d559580a0
Author: Michael Engel
Date:   2025-02-20 10:35:11 +02:00

    run : add --chat-template-file (#11961)

    Relates to: https://github.com/ggml-org/llama.cpp/issues/11178

    Added the --chat-template-file CLI option to llama-run. If specified,
    the file is read and its content passed to
    common_chat_templates_from_model, overriding the model's chat template.

    Signed-off-by: Michael Engel <mengel@redhat.com>
						 
				 
			
				
					
						
							
							
commit d04e7163c8
Author: Johannes Gäßler
Date:   2025-02-19 20:45:17 +01:00

    doc: add links to ggml examples [no ci] (#11958)

commit d07c621393
Author: Daniel Bevenius
Date:   2025-02-19 12:29:52 +01:00

    common : add llama.vim preset for Qwen2.5 Coder (#11945)

    This commit adds a preset for llama.vim to use the default Qwen 2.5
    Coder models.

    The motivation for this change is to make it easier to start a server
    suitable to be used with the llama.vim plugin. For example, the server
    can be started with a command like the following:

        $ llama.vim --fim-qwen-1.5b-default

    Refs: https://github.com/ggml-org/llama.cpp/issues/10932
						 
				 
			
				
					
						
							
							
commit abd4d0bc4f
Author: Georgi Gerganov
Date:   2025-02-19 13:29:42 +02:00

    speculative : update default params (#11954)

    * speculative : update default params
    * speculative : do not discard the last drafted token

commit 9626d9351a
Author: Daniel Bevenius
Date:   2025-02-19 06:16:23 +01:00

    llama : fix indentation in llama-grammar [no ci] (#11943)

    This commit adjusts the indentation of the functions `parse_sequence`
    and `parse_rule` in src/llama-grammar.cpp.

    The motivation is consistency and improved readability.
						 
				 
			
				
					
						
							
							
commit b58934c183
Author: igardev
Date:   2025-02-18 23:01:44 +01:00

    server : (webui) Enable communication with parent html (if webui is in iframe) (#11940)

    * Webui: enable communication with the parent html when the webui runs
      in an iframe:
      - Listens for a "setText" command from the parent with "text" and
        "context" fields. "text" is set in inputMsg; "context" is used as
        hidden context on the following requests to the llama.cpp server.
      - On pressing the Escape button, sends an "escapePressed" command to
        the parent.

      Example handling from the parent html side:

      - Send the "setText" command from the parent html to the webui iframe:

            const iframe = document.getElementById('askAiIframe');
            if (iframe) {
                iframe.contentWindow.postMessage({ command: 'setText', text: text, context: context }, '*');
            }

      - Listen for the Escape key from the webui on the parent html:

            // Listen for the escape key event coming from the iframe
            window.addEventListener('keydown', (event) => {
                if (event.key === 'Escape') {
                    // Handle Escape pressed inside the webui
                }
            });

    * Move the extraContext from storage to app.context.
    * Fix formatting.
    * add Message.extra
    * format + build
    * MessageExtraContext
    * build
    * fix display
    * rm console.log

    Co-authored-by: igardev <ivailo.gardev@akros.ch>
    Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
						 
				 
			
				
					
						
							
							
commit 63e489c025
Author: Olivier Chafik
Date:   2025-02-18 18:03:23 +00:00

    tool-call: refactor common chat / tool-call api (+ tests / fixes) (#11900)

    * tool-call refactoring: moved common_chat_* to chat.h; common_chat_templates_init returns a unique_ptr to an opaque type
    * addressed clang-tidy lints in [test-]chat.*
    * rm minja deps from util & common & move it to common/minja/
    * add name & tool_call_id to common_chat_msg
    * add common_chat_tool
    * added json <-> tools, msgs conversions to chat.h
    * fix double bos/eos jinja avoidance hack (was preventing inner bos/eos tokens)
    * fix deepseek r1 slow test (no longer <think> opening w/ new template)
    * allow empty tools w/ auto + grammar
    * fix & test server grammar & json_schema params w/ & w/o --jinja
						 
				 
			
				
					
						
							
							
commit 63ac128563
Author: Xuan-Son Nguyen
Date:   2025-02-18 14:21:41 +01:00

    server : add TEI API format for /rerank endpoint (#11942)

    * server : add TEI API format for /rerank endpoint
    * Apply suggestions from code review
    * fix
    * also gitignore examples/server/*.gz.hpp

    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
						 
				 
			
				
					
						
							
							
commit 5137da7b8c
Author: MoonRide303
Date:   2025-02-18 10:30:16 +01:00

    scripts: corrected encoding when getting chat template (#11866) (#11907)

    Signed-off-by: MoonRide303 <moonride303@gmail.com>

commit 09aaf4f1f5
Author: xiaobing318
Date:   2025-02-18 10:12:49 +01:00

    docs : Fix duplicated file extension in test command (#11935)

    This commit fixes an issue where the command for testing the
    llama-server object contained a duplicated file extension. The original
    command was:

        ./tests.sh unit/test_chat_completion.py.py -v -x

    It has been corrected to:

        ./tests.sh unit/test_chat_completion.py -v -x

    This change ensures that the test script correctly locates and executes
    the intended test file, preventing test failures due to an incorrect
    file name.
						 
				 
			
				
					
						
							
							
commit 73e2ed3ce3
Author: Johannes Gäßler
Date:   2025-02-17 14:03:24 +01:00

    CUDA: use async data loading for FlashAttention (#11894)

    Co-authored-by: Diego Devesa <slarengh@gmail.com>

commit f7b1116af1
Author: Eve
Date:   2025-02-17 12:20:23 +01:00

    update release requirements (#11897)
						 
				 
			
				
					
						
							
							
commit c4d29baf32
Author: Antoine Viallon
Date:   2025-02-17 11:25:12 +01:00

    server : fix divide-by-zero in metrics reporting (#11915)
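The class of bug fixed here is an unguarded average: a metric such as tokens-per-second divides an accumulated total by a request count that is zero until the first request completes. The guard pattern, sketched with a hypothetical helper:

```python
def mean_or_zero(total: float, count: int) -> float:
    """Average that guards the empty case: report 0.0 (rather than
    crashing or emitting inf/NaN) while no samples have been collected."""
    return total / count if count > 0 else 0.0
```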
						 
				 
			
				
					
						
							
							
commit 2eea03d86a
Author: Rémy O
Date:   2025-02-17 07:55:57 +01:00

    vulkan: implement several ops relevant for ggml_opt (#11769)

    * vulkan: support memset_tensor
    * vulkan: support GGML_OP_SUM
    * vulkan: implement GGML_OP_ARGMAX
    * vulkan: implement GGML_OP_SUB
    * vulkan: implement GGML_OP_COUNT_EQUAL
    * vulkan: implement GGML_OP_OPT_STEP_ADAMW
    * vulkan: fix check_results RWKV_WKV6 crash and memory leaks
    * vulkan: implement GGML_OP_REPEAT_BACK
    * tests: remove invalid test-backend-ops REPEAT_BACK tests
    * vulkan: fix COUNT_EQUAL memset using a fillBuffer command

commit 0f2bbe6564
Author: Xuan-Son Nguyen
Date:   2025-02-16 17:11:22 +00:00

    server : bump httplib to 0.19.0 (#11908)

commit fe163d5bf3
Author: standby24x7
Date:   2025-02-16 10:51:13 +01:00

    common : Fix a typo in help (#11899)

    This patch fixes a typo in the command help:
    prefx -> prefix

    Signed-off-by: Masanari Iida <standby24x7@gmail.com>