e54d41befc · compilade · 2025-08-08 17:48:26 -04:00 · b6121
gguf-py : add Numpy MXFP4 de/quantization support (#15111)
* gguf-py : add MXFP4 de/quantization support
* ggml-quants : handle zero amax for MXFP4
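For context, MXFP4 is the OCP microscaling format this commit handles in Numpy: blocks of 32 e2m1 (4-bit) values sharing one e8m0 power-of-two scale byte. A minimal sketch of the round trip follows; the rounding and nibble layout here are assumptions for illustration, not gguf-py's exact code.

```python
import numpy as np

# e2m1 magnitudes; the sign lives in the high bit of each 4-bit code
E2M1 = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0], dtype=np.float32)

def quant_block_mxfp4(x: np.ndarray):
    """Quantize one 32-element block -> (e8m0 scale byte, 32 sign|idx codes)."""
    amax = float(np.abs(x).max())
    if amax == 0.0:
        # zero-amax guard: an all-zero block would otherwise hit log2(0)
        return 127, np.zeros(32, dtype=np.uint8)  # 2^(127-127) = 1.0 scale
    # pick a power-of-two scale so the largest magnitude lands near 6.0,
    # the top e2m1 value; e8m0 stores only an exponent, biased by 127
    e = int(np.floor(np.log2(amax))) - 2
    mag = np.abs(x) / np.float32(2.0) ** e
    idx = np.abs(E2M1[None, :] - mag[:, None]).argmin(axis=1).astype(np.uint8)
    sign = (x < 0).astype(np.uint8) << 3
    return e + 127, sign | idx

def dequant_block_mxfp4(e8m0: int, codes: np.ndarray) -> np.ndarray:
    sign = np.where(codes & 0x08, np.float32(-1.0), np.float32(1.0))
    return sign * E2M1[codes & 0x07] * np.exp2(np.float32(e8m0 - 127))

x = np.random.randn(32).astype(np.float32)
s, q = quant_block_mxfp4(x)
print(np.abs(dequant_block_mxfp4(s, q) - x).max())  # coarse, but bounded
```

The amax == 0 branch mirrors the second bullet: without it, deriving the shared exponent from an all-zero block would take the logarithm of zero.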

4850b52aed · Johannes Gäßler · 2025-08-08 23:04:36 +02:00
server-bench: external OAI servers, sqlite (#15179)
* server-bench: external OAI servers, sqlite
* Update scripts/server-bench.py (applied review suggestions)
* raise_for_status
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

cd6983d56d · AN Long · 2025-08-08 14:37:22 +02:00 · b6119
ggml : fix field name when new ggml_backend (#14944)

6c7e9a5440 · Olivier Chafik · 2025-08-08 10:45:18 +01:00 · b6118
vendor: sync minja (#15161)
* vendor: sync minja
* Update minja.hpp
* Apply suggestions from code review
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

1425f587a8 · Johannes Gäßler · 2025-08-08 08:19:58 +02:00 · b6117
CUDA: attention sinks for mma FlashAttention (#15157)

aaa3d07ae7 · lhez · 2025-08-07 21:47:03 -07:00 · b6116
opencl: support sink in soft_max (attn sinks) (#15152)
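These sink commits (this one and b6117 above) extend softmax: each head carries an extra learned "sink" logit that joins the normalization but has no value row attached, damping all attention weights. A Numpy sketch of the idea, not the OpenCL or CUDA kernels:

```python
import numpy as np

def softmax_with_sink(scores: np.ndarray, sink: float) -> np.ndarray:
    """Softmax over one row of attention scores where a per-head 'sink'
    logit enlarges the denominator but produces no output value."""
    m = max(float(scores.max()), sink)   # subtract the max for stability
    e = np.exp(scores - m)
    return e / (e.sum() + np.exp(sink - m))

w = softmax_with_sink(np.array([1.0, 2.0, 3.0]), sink=2.5)
print(w, w.sum())  # sums to less than 1; the remainder "drains" to the sink
```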

50aa938901 · Xuan-Son Nguyen · 2025-08-07 23:26:03 +02:00 · b6115
convert : support non-mxfp4 HF model (#15153)
* convert : support non-mxfp4 HF model
* rm redundant check
* disable debug check

c4f53563df · Jeff Bolz · 2025-08-07 22:44:20 +02:00 · b6114
vulkan: support fattn sinks (#15126)

a0552c8bee · Jeff Bolz · 2025-08-07 22:07:11 +02:00 · b6113
vulkan: Add env var to disable host visible vidmem (#15109)

99acbc9921 · RunningLeon · 2025-08-07 18:20:40 +02:00
llama : Support intern-s1 (#14875)
* support internvl
* support interns1
* resolve comments
* put interns1 in tensor mapping
* resolve comment
* move tokenizer changes to subclass

7ad67ba9fe · uvos · 2025-08-07 16:44:14 +02:00 · b6111
HIP: add cmake option to enable compiler output of kernel resource usage metrics (#15103)

9a96389544 · Christian Kastner · 2025-08-07 13:45:41 +02:00
ggml: Skip backend library linking code when GGML_BACKEND_DL=ON (#15094)
Any available libraries are found and loaded dynamically at runtime.

1d72c84188 · Johannes Gäßler · 2025-08-07 10:53:21 +02:00 · b6109
CUDA: GEMM for FP32/FP16/BF16 and ne11 <= 16 (#15131)

20638e4f16 · Johannes Gäßler · 2025-08-07 08:50:30 +02:00
scripts: fix crash when --tool is not set (#15133)

36d3f00e14 · Daniel Bevenius · 2025-08-07 05:31:48 +02:00
requirements : fix PyTorch uint64 compatibility (#15134)
This commit addresses an issue with the convert_hf_to_gguf script, which currently fails with:
```console
AttributeError: module 'torch' has no attribute 'uint64'
```
This occurs because safetensors expects torch.uint64 to be available in the public API, but PyTorch 2.2.x only provides limited support for unsigned types beyond uint8. The torch.uint64 dtype exists but is not exposed in the standard torch namespace (see pytorch/pytorch#58734). PyTorch 2.4.0 properly exposes torch.uint64 in the public API, resolving the compatibility issue with safetensors. This also required torchvision to be updated to 0.19.0 for compatibility.
Refs: https://huggingface.co/spaces/ggml-org/gguf-my-repo/discussions/186#68938de803e47d990aa087fb
Refs: https://github.com/pytorch/pytorch/issues/58734
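A minimal guard that reproduces the constraint described above; the torch >= 2.4.0 bound comes from the commit message, while the check itself is illustrative rather than part of convert_hf_to_gguf:

```python
import torch

# safetensors expects torch.uint64 in the public namespace; torch < 2.4.0
# does not expose it, which produces the AttributeError quoted above
if not hasattr(torch, "uint64"):
    raise RuntimeError(
        f"PyTorch {torch.__version__} lacks torch.uint64; upgrade to "
        "torch>=2.4.0 (with a matching torchvision, e.g. 0.19.0)."
    )
```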

5fd160bbd9 · Reese Levine · 2025-08-06 15:14:40 -07:00 · b6106
ggml: Add basic SET_ROWS support in WebGPU (#15137)
* Begin work on set_rows
* Work on set rows
* Add error buffers for reporting unsupported SET_ROWS indices
* Remove extra comments

756cfea826 · rmatif · 2025-08-06 14:17:51 -07:00 · b6105
fix profiling crash (#15072)

e725a1a982 · lhez · 2025-08-06 12:12:17 -07:00 · b6104
opencl: add swiglu_oai and add_id (#15121)
* opencl: add `swiglu-oai`
* opencl: add `add_id`
* opencl: add missing `add_id.cl`
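swiglu_oai is the clamped SwiGLU variant used by gpt-oss (see b6096 below). A Numpy sketch under the assumption that it follows the gpt-oss reference formulation; the alpha = 1.702 and limit = 7.0 defaults are assumed here, not read from the OpenCL kernels:

```python
import numpy as np

def swiglu_oai(gate: np.ndarray, up: np.ndarray,
               alpha: float = 1.702, limit: float = 7.0) -> np.ndarray:
    """Clamped SwiGLU with a sigmoid(alpha * x) gate and a +1 bias on the
    linear branch; backends fuse this into a single op."""
    gate = np.minimum(gate, limit)             # clamp the gate from above
    up = np.clip(up, -limit, limit)            # clamp the linear branch
    glu = gate / (1.0 + np.exp(-alpha * gate))  # gate * sigmoid(alpha*gate)
    return (up + 1.0) * glu
```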

3db4da56a5 · Sachin Desai · 2025-08-06 20:27:30 +02:00 · b6103
chat : support Granite model reasoning and tool call (#14864)

476aa3fd57 · Juk Armstrong · 2025-08-06 17:28:48 +01:00 · b6102
Fixed name -override-tensors to -override-tensor (#15129)

0d8831543c · Diego Devesa · 2025-08-06 14:37:35 +02:00 · b6101
ggml : fix fallback to CPU for unsupported ops (#15118)

65c797c4fa · Sigbjørn Skjæret · 2025-08-06 13:26:49 +02:00 · b6100
chat : fix yandex chat template (#15116)

25726898e8 · stevenkuang · 2025-08-06 11:48:30 +02:00 · b6099
chat : fix hunyuan auto-detection (#15114)
Signed-off-by: stevenkuang <stevenkuang@tencent.com>

2241453252 · Chenguang Li · 2025-08-06 14:12:42 +08:00 · b6098
CANN: add support for ACL Graph (#15065)
* feat(cann): add optional support for ACL Graph execution
This commit adds support for executing ggml computational graphs using Huawei's ACL graph mode via the USE_CANN_GRAPH flag. The support can be enabled at compile time with the CMake option:
    -DUSE_CANN_GRAPH=ON
By default, ACL graph execution is **disabled** and the fallback path uses node-by-node execution.
Key additions:
- CMake option to toggle graph mode
- Graph capture and execution logic
- Tensor property matching to determine whether a graph update is required
- Safe fallback and logging if the environment variable LLAMA_SET_ROWS is unset or invalid
This prepares the backend for performance improvements in repetitive graph execution scenarios on Ascend devices.
* Fix review comments
* Rename USE_CANN_GRAPH to USE_ACL_GRAPH
* Fix typo
Signed-off-by: noemotiovon <757486878@qq.com>

9515c6131a · Reese Levine · 2025-08-05 16:26:38 -07:00 · b6097
ggml: WebGPU disable SET_ROWS for now (#15078)
* Add parameter buffer pool, batching of submissions, refactor command building/submission
* Add header for linux builds
* Free staged parameter buffers at once
* Format with clang-format
* Fix thread-safe implementation
* Use device implicit synchronization
* Update workflow to use custom release
* Remove testing branch workflow
* Disable set_rows until it's implemented
* Fix potential issue around empty queue submission
* Try synchronous submission
* Try waiting on all futures explicitly
* Add debug
* Add more debug messages
* Work on getting ssh access for debugging
* Debug on failure
* Disable other tests
* Remove extra if
* Try more locking
* maybe passes?
* test
* Some cleanups
* Restore build file
* Remove extra testing branch ci

fd1234cb46 · Georgi Gerganov · 2025-08-05 22:10:36 +03:00 · b6096
llama : add gpt-oss (#15091)
* oai moe
* compat with new checkpoint
* add attn sink impl
* add rope scaling yarn
* logits match with latest transformers code
* wip chat template
* rm trailing space
* use ggml_scale_bias
* rm redundant is_swa_all
* convert interleaved gate_up
* graph : fix activation function to match reference (#7)
* vocab : handle o200k_harmony special tokens
* ggml : add attention sinks support (#1)
  - llama : add attn sinks
  - ggml : add attn sinks
  - cuda : add attn sinks
  - vulkan : add support for sinks in softmax; remove unnecessary return
* ggml : add fused swiglu_oai op (#11)
  - update CPU (ggml/src/ggml-cpu/ops.cpp), CUDA, metal, and vulkan impls
  - test-backend-ops : more test cases, clean up
  - llama : remove unfused impl
* repack mxfp4 upon conversion
* clean up a bit
* enable thinking
* add quick hack to render only some special tokens
* fix bf16 conversion
* remove vocab hack
* webui ok
* support chat parsing for gpt-oss
* fix webui
* direct mapping mxfp4, FINALLY
* force using mxfp4
* properly use lazy tensor
* ggml : add mxfp4
  - use e8m0 conversion instead of powf
  - change kvalues_mxfp4 table to match e2m1 (#6)
  - metal : remove quantization for now (not used)
  - cuda : fix disabled CUDA graphs due to ffn moe bias
  - vulkan : add support for mxfp4; cont : add cm2 dequant
* ggml : add ggml_add_id (#13)
  - add cuda, vulkan, and metal impls; rename cuda files
  - llama : add weight support check for add_id
  - perf opt; allow in-place ggml_add_id
* llama : keep biases on CPU with --cpu-moe
* llama : fix compile error
* cuda : add fallback for __nv_cvt_e8m0_to_bf16raw
* cleanup
* sycl : fix supports_op for MXFP4
* fix Unknown reasoning format
* ggml-cpu : fix AVX build
* fix hip build
* cuda : add mxfp4 dequantization support for cuBLAS
* ggml-cpu : fix mxfp4 fallback definitions for some architectures
* cuda : fix version required for __nv_cvt_e8m0_to_bf16raw
Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: slaren <slarengh@gmail.com>
Co-authored-by: Diego Devesa <slarengh@gmail.com>
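Of the new ops above, ggml_add_id is the MoE helper for the per-expert biases. A Numpy sketch of its assumed semantics, out[i] = x[i] + bias[ids[i]]; the argument order and the exact ggml signature are not reproduced here:

```python
import numpy as np

def add_id(x: np.ndarray, bias: np.ndarray, ids: np.ndarray) -> np.ndarray:
    """Add the bias row of the expert chosen for each token row, without
    materializing a gathered bias matrix."""
    return x + bias[ids]

x = np.zeros((3, 8), dtype=np.float32)                # 3 routed rows
bias = np.arange(32, dtype=np.float32).reshape(4, 8)  # 4 experts
print(add_id(x, bias, np.array([2, 0, 3]))[:, 0])     # [16.  0. 24.]
```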

f324a3b715 · Sigbjørn Skjæret · 2025-08-05 20:43:36 +02:00 · b6095
chat : only remove double bos/eos if added (#15086)
* only remove double bos/eos if added
* fix tests

be42642581 · Georgi Gerganov · 2025-08-05 20:19:33 +03:00
readme : update hot topics (#15097)

3306ceabf0 · Romain Biessy · 2025-08-05 18:39:55 +02:00 · b6093
sycl: fix mul_mat selection (#15092)

c81de6e107 · Juk Armstrong · 2025-08-05 13:56:44 +01:00 · b6092
Fix glm4moe bug (#15088)

22f060c9c4 · Alex Wu · 2025-08-05 13:56:44 +02:00
webui: fix markdown table (#15081)
* webui: fix markdown table
* webui: fix table display with themes

ee3a9fcf88 · compilade · 2025-08-05 11:27:45 +02:00 · b6090
context : fix index overflow on huge outputs (#15080)
* context : fix overflow when re-ordering huge outputs
* context : fix logits size overflow for huge batches
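The class of bug fixed here is 32-bit index arithmetic on large output buffers. An illustration with made-up sizes, not taken from the commit:

```python
import numpy as np

# with ~20k output rows and a ~150k vocab, row * n_vocab exceeds INT32_MAX
# and wraps in 32-bit arithmetic (NumPy warns); widening the index fixes it
row, n_vocab = np.int32(20_000), np.int32(152_064)
print(row * n_vocab)             # wraps to a negative offset
print(np.int64(row) * n_vocab)   # 3041280000, the intended offset
```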

ec428b02c3 · Diego Devesa · 2025-08-05 01:05:36 +02:00 · b6089
llama : add --n-cpu-moe option (#15077)
* llama : add --n-cpu-moe option
Keeps the MoE weights of the first N layers on the CPU
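A sketch of how such an option can expand into tensor-buffer overrides, assuming the usual llama.cpp expert-tensor naming (blk.N.ffn_*_exps); the helper name and exact regexes are illustrative:

```python
def n_cpu_moe_overrides(n_layers: int) -> list[str]:
    # one pattern per layer matching the expert FFN tensors, each to be
    # mapped to the CPU buffer type
    return [rf"blk\.{i}\.ffn_(up|down|gate)_exps" for i in range(n_layers)]

print(n_cpu_moe_overrides(2))
```

Pinning those patterns to CPU is roughly what passing per-tensor overrides (see the -override-tensor fix in b6102 above) would do by hand.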

19f68fa5a4 · compilade · 2025-08-04 23:26:52 +02:00 · b6088
imatrix : warn when GGUF imatrix is saved without .gguf suffix (#15076)
* imatrix : add warning when suffix is not .gguf for GGUF imatrix
* imatrix : only warn about suffix when output format is unspecified
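The warning logic the two bullets describe, condensed into a Python sketch; the real implementation lives in the C++ imatrix tool, and the names here are illustrative:

```python
from pathlib import Path
from typing import Optional

def warn_on_suffix(out_file: str, output_format: Optional[str]) -> None:
    # warn only when the user did not choose a format explicitly and the
    # file name will not carry the .gguf suffix
    if output_format is None and Path(out_file).suffix != ".gguf":
        print(f"warning: GGUF imatrix saved without .gguf suffix: {out_file}")

warn_on_suffix("imatrix.dat", None)    # warns
warn_on_suffix("imatrix.dat", "dat")   # silent: format was explicit
```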

41613437ff · Christian Kastner · 2025-08-04 21:29:14 +02:00 · b6087
cmake: Add GGML_BACKEND_DIR option (#15074)
* cmake: Add GGML_BACKEND_DIR option
This can be used by distributions to specify where to look for backends when ggml is built with GGML_BACKEND_DL=ON.
* Fix phrasing

e5bebe5251 · Sigbjørn Skjæret · 2025-08-04 21:01:48 +02:00
gguf-py : add --chat-template-file to gguf_new_metadata (#15075)

ef0144c087 · Sam · 2025-08-04 20:29:25 +02:00 · b6085
model: support GLM 4.5 family of models (#14939)
* model: Add GLM 4.5 (#14921)
* Merge in PR suggestions
* model: Add GLM 4.5 family of models (#14921)
1. Updated tensor_mapping.py with NextN tensor mappings
- Added proper tensor mappings for all NextN/MTP tensors in gguf-py/gguf/tensor_mapping.py
- Added mappings for: eh_proj, embed_tokens, enorm, hnorm, shared_head.head, shared_head.norm
2. Added num_nextn_predict_layers configuration
- Added LLM_KV_NUM_NEXTN_PREDICT_LAYERS constant to llama-arch.h and llama-arch.cpp
- Added num_nextn_predict_layers field to llama_hparams struct
- Updated GLM4_MOE parameter loading in llama-model.cpp to read this parameter
- Modified tensor loading logic to conditionally load NextN tensors based on num_nextn_predict_layers
- Added GGUF writer support in gguf_writer.py with add_num_nextn_predict_layers() method
- Updated conversion script to extract and write this parameter from HuggingFace config
3. Added FIM tokens for GLM4_MOE
- Added GLM-4.5's FIM tokens to llama-vocab.cpp:
  - <|code_prefix|> for FIM_PRE
  - <|code_suffix|> for FIM_SUF
  - <|code_middle|> for FIM_MID
4. Removed manual NextN tensor handling
- Removed the special-case handling in convert_hf_to_gguf.py that manually mapped NextN tensors
- NextN tensors are now handled automatically through the proper tensor mapping system
* glm 4.5 update tensor names
* model: glm 4.5 apply suggestions from code review
* patch broken chat template
* typings fix
* add TENSOR_SKIP flag
* Update src/llama-model-loader.h
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
Co-authored-by: Diego Devesa <slarengh@gmail.com>

2721257e3e · Sigbjørn Skjæret · 2025-08-04 18:11:02 +02:00 · b6084
quantize : fix confusing error message if ftype is invalid (#15071)

587d0118f5 · Reese Levine · 2025-08-04 08:52:43 -07:00 · b6083
ggml: WebGPU backend host improvements and style fixing (#14978)
* Add parameter buffer pool, batching of submissions, refactor command building/submission
* Add header for linux builds
* Free staged parameter buffers at once
* Format with clang-format
* Fix thread-safe implementation
* Use device implicit synchronization
* Update workflow to use custom release
* Remove testing branch workflow

5aa1105da2 · Jeff Bolz · 2025-08-04 07:09:19 +02:00 · b6082
vulkan: fix build when using glslang that does not support coopmat2 (#15062)

d31192b4ee · compilade · 2025-08-03 22:00:05 +02:00 · b6081
imatrix : use GGUF by default (#14842)
* imatrix : use GGUF by default
* imatrix : use GGUF regardless of the output filename
The legacy format can only be produced with --output-format dat

0a2f5496be · compilade · 2025-08-03 21:49:13 +02:00 · b6080
imatrix : fix 3d activation handling for hybrid and recurrent models (#14994)
* imatrix : use a single count for dense 3d tensors
* imatrix : fix 3d activations when model tensor is 2d
* imatrix : fix 3d tensor counts

11a3811164 · compilade · 2025-08-03 21:43:07 +02:00 · b6079
memory : handle kv_unified for hybrid models (#15050)

97366dc6ab · Csaba Kecskemeti · 2025-08-03 21:38:18 +02:00 · b6078
vocab : JetBrains Mellum pre-tokenizer (#15045)

83bc2f288c · Gabriel Larson · 2025-08-03 16:56:25 +02:00
model : add text-only support for Kimi-VL (and find special tokens in text_config) (#15051)
* basic kimi-vl textmodel conversion
* check config["text_config"] for special tokens

6c7a441161 · Jeff Bolz · 2025-08-03 14:23:57 +02:00 · b6076
vulkan: Use coopmat2 for conv2d (#14982)

5c0eb5ef54 · lhez · 2025-08-02 19:51:18 +02:00 · b6075
opencl: fix adreno compiler detection logic (#15029)

03d4698218 · Johannes Gäßler · 2025-08-02 16:37:08 +02:00 · b6074
CUDA: use mma FA kernel for gqa > 4 on RTX 4000 (#15035)

3303c19b16 · leejet · 2025-08-02 17:15:36 +03:00 · b6073
cuda: make im2col a little faster (#15025)

4fdea540bd · Daniel Bevenius · 2025-08-02 17:14:57 +03:00
kv-cache : skip alignment of n_stream in kv-cache log msg [no ci] (#15040)
This commit removes the right alignment of the `n_stream` value in the log message in the `llama_kv_cache_unified` constructor.
The motivation for this change is to enhance the readability of the log message. Currently the output looks like this:
```console
llama_kv_cache_unified: size = 2048.00 MiB (  4096 cells,  32 layers,  1/ 1 seqs), K (f16): 1024.00 MiB, V (f16): 1024.00 MiB
```
Notice that the `n_stream` value is right aligned, which makes it a little harder to read.
With the change in this commit the output will look like:
```console
llama_kv_cache_unified: size = 2048.00 MiB (  4096 cells,  32 layers, 1/1 seqs), K (f16): 1024.00 MiB, V (f16): 1024.00 MiB
```