commit 59fee24c72
Author: Georgi Gerganov
Date:   2025-06-18 09:29:51 +03:00

    recurrent : rework graph inputs + add TODOs

    ggml-ci

commit faf41199c0
Author: Gabe Goodhart
Date:   2025-06-17 14:54:19 -06:00

    refactor: Use a common build_recurrent_state method that is cache-agnostic

    This reduces the code duplication between the different build_rs impls
    and also retains a similar signature to the previous build_recurrent_state
    method while standardizing on the input-dispatched build_rs implementation.

    Branch: HybridRecurrentCache
    Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

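The shape of this refactor can be sketched in a few lines of self-contained C++. Everything below is an illustrative stand-in rather than the real llama.cpp types; the point is only that each cache-specific build_rs overload reduces to a thin wrapper around one shared, cache-agnostic core.

```cpp
#include <cstdio>

// Stand-in for ggml_tensor; none of these types are the real llama.cpp ones.
struct tensor { const char * name; };

// The cache-agnostic core: it only needs the state tensor and the copy
// indices, not the concrete cache class that produced them.
static tensor * build_rs_common(tensor * state, tensor * s_copy) {
    std::printf("gather %s rows via %s\n", state->name, s_copy->name);
    return state;
}

// Each cache-specific overload extracts its pieces, then delegates.
static tensor * build_rs_recurrent(tensor * state, tensor * s_copy) {
    return build_rs_common(state, s_copy);
}

static tensor * build_rs_hybrid(tensor * state, tensor * s_copy) {
    return build_rs_common(state, s_copy); // same core, different caller
}

int main() {
    tensor st = {"s_l0"};
    tensor cp = {"inp_s_copy"};
    build_rs_recurrent(&st, &cp);
    build_rs_hybrid(&st, &cp);
    return 0;
}
```
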
commit 5046d412ef
Author: Gabe Goodhart
Date:   2025-06-17 14:54:19 -06:00

    fix: Fix initialization of child states

    Since this PR was initially written, the logic in the child state types
    has changed such that using the "init full" signature and keeping the
    ubatches on the parent struct no longer worked.

    Branch: HybridRecurrentCache
    Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

commit 9db44a2a63
Author: Gabe Goodhart
Date:   2025-06-17 14:54:19 -06:00

    fix: Fix resize vs reserve and skip null tensors in size computation

    https://github.com/ggml-org/llama.cpp/pull/13979/files#r2149469788

    Branch: HybridRecurrentCache
    Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
    Co-authored-by: @younesbelkada

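For context on the two mistakes named in the subject, here is a small self-contained C++ sketch; tensor_stub and size_bytes are hypothetical stand-ins, not llama.cpp code. reserve() only allocates capacity, so indexing afterwards is undefined behavior, while resize() creates valid elements; and a size sum over a tensor list has to skip null entries.

```cpp
#include <cstddef>
#include <vector>

struct tensor_stub { size_t nbytes; };

// Summing sizes must skip null tensors instead of dereferencing them.
static size_t size_bytes(const std::vector<tensor_stub *> & tensors) {
    size_t total = 0;
    for (const tensor_stub * t : tensors) {
        if (t == nullptr) {
            continue;
        }
        total += t->nbytes;
    }
    return total;
}

int main() {
    std::vector<tensor_stub *> v;
    // v.reserve(8) alone would leave v.size() == 0 and make v[i] undefined;
    // resize() actually creates 8 (null) elements that are safe to access.
    v.resize(8, nullptr);
    return size_bytes(v) == 0 ? 0 : 1;
}
```
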
commit 11cd80d5de
Author: Gabe Goodhart
Date:   2025-06-17 14:54:19 -06:00

    feat: Overhaul build_recurrent_state / build_inp_s_copy to match attention pattern

    https://github.com/ggml-org/llama.cpp/pull/13979/files#r2141701738

    This is a big overhaul to bring consistency between how inputs and
    per-layer components are created for attention layers and recurrent
    layers. The main changes are:

    - Rename class llm_graph_input_s_copy -> llm_graph_input_rs
    - Add a corresponding llm_graph_input_rs_hybrid_recurrent
    - Rename build_inp_s_copy -> build_rs_inp_recurrent
    - Add a corresponding build_rs_inp_hybrid_recurrent
    - Rename build_recurrent_state -> build_rs to match build_attn w/
      llm_graph_input_rs as the first input
    - Add a corresponding overload of build_rs w/
      llm_graph_input_rs_hybrid_recurrent as the first input
    - Add a llm_graph_input_attn_kv_hybrid_recurrent analogous to
      llm_graph_input_attn_kv_unified
    - Add a build_attn override that takes
      llm_graph_input_attn_kv_hybrid_recurrent as the first input

    This makes the two paradigms fully consistent. The main drawback is the
    code duplication in the build_attn and build_rs implementations, where
    the only difference between implementations is how they cast the memory
    state.

    Branch: HybridRecurrentCache
    Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

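A rough sketch of the "attention pattern" this commit converges on, with all types as simplified stand-ins: a graph-input object is created once per graph and then passed as the first argument to the per-layer builder, for recurrent layers exactly as build_attn already does for attention layers.

```cpp
#include <memory>
#include <vector>

struct tensor_stub {};

// One graph-input object per memory type, created once per graph.
struct input_rs {
    tensor_stub * s_copy = nullptr; // source slot indices for state copies
};

struct graph_builder {
    // build_rs_inp creates the input; build_rs consumes it per layer,
    // mirroring the build_attn(inp_attn, ...) calling convention.
    input_rs * build_rs_inp() {
        inputs.push_back(std::make_unique<input_rs>());
        return inputs.back().get();
    }

    tensor_stub * build_rs(input_rs * /*inp*/, int /*il*/) {
        return nullptr; // the real code gathers the layer's recurrent state
    }

    std::vector<std::unique_ptr<input_rs>> inputs;
};

int main() {
    graph_builder gb;
    input_rs * inp = gb.build_rs_inp(); // once per graph
    for (int il = 0; il < 4; ++il) {
        gb.build_rs(inp, il);           // per layer, input passed first
    }
    return 0;
}
```
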
commit 4ec4e6a801
Author: Gabe Goodhart
Date:   2025-06-17 14:54:19 -06:00

    refactor: Use llama_memory_state_ptr for child states in hybrid memory state

    Branch: HybridRecurrentCache
    Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

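A minimal sketch of the ownership change, assuming llama_memory_state_ptr is a smart pointer over the abstract memory-state interface (the types below are simplified stand-ins): holding the children through the abstract pointer lets the hybrid state own them without naming their concrete classes.

```cpp
#include <memory>

struct memory_state {
    virtual ~memory_state() = default;
};

using memory_state_ptr = std::unique_ptr<memory_state>;

struct attn_state      : memory_state {};
struct recurrent_state : memory_state {};

// The hybrid state owns its children through the abstract pointer type,
// so it never needs to name their concrete classes.
struct hybrid_state : memory_state {
    memory_state_ptr state_attn  = std::make_unique<attn_state>();
    memory_state_ptr state_recur = std::make_unique<recurrent_state>();
};

int main() {
    hybrid_state hs;
    return hs.state_attn && hs.state_recur ? 0 : 1;
}
```
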
commit 7ba463b38c
Author: Gabe Goodhart
Date:   2025-06-17 14:54:19 -06:00

    fix: Remove llama_model_is_hybrid_Recurrent public API

    https://github.com/ggml-org/llama.cpp/pull/13979#discussion_r2141728423

    Branch: HybridRecurrentCache
    Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

commit 1510016ea4
Author: Gabe Goodhart
Date:   2025-06-17 14:54:19 -06:00

    fix: Remove logits_all after rebase

    Branch: HybridRecurrentCache
    Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

commit d8c929ff5d
Author: Gabe Goodhart
Date:   2025-06-17 14:54:19 -06:00

    feat: Allow custom layer filters for hybrid recurrent

    This should help support architectures like Falcon H1 where there is
    overlap between layers that need attention and recurrent caches.

    https://github.com/ggml-org/llama.cpp/pull/13979#discussion_r2140748922

    Branch: HybridRecurrentCache
    Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

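One plausible shape for such a filter, sketched with a hypothetical layer_filter_cb (the real signature in llama.cpp may differ): a per-layer predicate that each cache applies when deciding which layers it manages, so two caches can claim overlapping layer sets.

```cpp
#include <cstdint>
#include <functional>
#include <vector>

// Hypothetical filter signature: given a layer index, decide whether a
// cache manages that layer. A null filter means "all layers".
using layer_filter_cb = std::function<bool(int32_t il)>;

static std::vector<int32_t> select_layers(int32_t n_layer, const layer_filter_cb & filter) {
    std::vector<int32_t> layers;
    for (int32_t il = 0; il < n_layer; ++il) {
        if (!filter || filter(il)) {
            layers.push_back(il);
        }
    }
    return layers;
}

int main() {
    // Falcon-H1-style overlap: every layer is recurrent, AND every 4th
    // layer also gets an attention cache.
    auto recur = select_layers(8, nullptr);
    auto attn  = select_layers(8, [](int32_t il) { return il % 4 == 0; });
    return recur.size() == 8 && attn.size() == 2 ? 0 : 1;
}
```
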
commit d5d7628b5f
Author: Gabe Goodhart
Date:   2025-06-17 14:54:19 -06:00

    refactor: Remove n_embd_k/v_gqa from recurrent cache

    This is no longer needed now that there are separate implementations.

    https://github.com/ggml-org/llama.cpp/pull/13979#discussion_r2140825128

    Branch: HybridRecurrentCache
    Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

commit b42c8b43cf
Author: Gabe Goodhart
Date:   2025-06-17 14:54:19 -06:00

    refactor: Remove layer index from n_embd_k/v_s

    Now that it's not used at all in the unified cache, we don't need to use
    the layer index to zero it out for attention layers.

    Branch: HybridRecurrentCache
    Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

commit 1dd12133cd
Author: Gabe Goodhart
Date:   2025-06-17 14:54:18 -06:00

    refactor: Remove n_embd_k/v_s from unified cache

    No longer needed now that the unified cache no longer also supports
    recurrent models.

    https://github.com/ggml-org/llama.cpp/pull/13979#discussion_r2140761069

    Branch: HybridRecurrentCache

commit 833dfb54ae
Author: Gabe Goodhart
Date:   2025-06-17 14:54:18 -06:00

    fix: Use per-layer n_embd_k/v_s calls for mamba (1) layers

    Branch: HybridRecurrentCache
    Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

commit f6d5f055c6
Author: Gabe Goodhart
Date:   2025-06-17 14:54:18 -06:00

    fix: Remove errant virtual destructor leftover from previous impl attempt

    Branch: HybridRecurrentCache
    Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

commit 9c1a604af8
Author: Gabe Goodhart
Date:   2025-06-17 14:54:18 -06:00

    fix: Update clear signature for data argument after rebase

    Branch: HybridRecurrentCache
    Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

commit de9297fd5e
Author: Gabe Goodhart
Date:   2025-06-17 14:54:18 -06:00

    fix: Add missing padding to n_ctx for hybrid cache construction

    Branch: GraniteFour
    Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

commit 911e694476
Author: Gabe Goodhart
Date:   2025-06-17 14:54:18 -06:00

    fix: Fix status for init_update sig for recurrent cache state

    Branch: GraniteFour
    Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

commit d3699366e6
Author: Gabe Goodhart
Date:   2025-06-17 14:54:18 -06:00

    fix: Update recurrent cache for changes to remove intermediate kv_cache interface

    Branch: HybridRecurrentCache
    Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

commit a9b5fe98ad
Author: Gabe Goodhart
Date:   2025-06-17 14:54:18 -06:00

    fix: Fix logic for initializing inputs and attn layers for hybrid caches

    Branch: GraniteFour
    Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

commit e3c1631556
Author: Gabe Goodhart
Date:   2025-06-17 14:54:18 -06:00

    feat: Support hybrid recurrent in llama-graph

    NOTE: I intentionally did not add support for s_mask since it will be
    going away soon.

    Branch: HybridRecurrentCache
    Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

commit cf03d4ae5c
Author: Gabe Goodhart
Date:   2025-06-17 14:54:18 -06:00

    fix: Fix shift logic to defer to unified cache

    Branch: HybridRecurrentCache
    Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

commit 6c6ec0003a
Author: Gabe Goodhart
Date:   2025-06-17 14:54:18 -06:00

    fix: Fix wrong bool condition for split equal in hybrid cache

    Branch: HybridRecurrentCache
    Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

commit 423c89401d
Author: Gabe Goodhart
Date:   2025-06-17 14:54:18 -06:00

    feat: Construct hybrid recurrent cache for hybrid recurrent models

    This includes a refactor of the create_memory logic to avoid needing to
    use the arch enum explicitly unless a model needs explicit cache
    instantiation logic beyond the standard logic for recurrent, hybrid,
    unified, and iswa.

    Branch: HybridRecurrentCache
    Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

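A condensed sketch of the dispatch this refactor enables, with stand-in types: the cache is chosen from model properties rather than by switching on the architecture enum, so new architectures that fit an existing pattern need no new case.

```cpp
#include <memory>

// Stand-ins for the real cache classes and model properties.
struct memory_i { virtual ~memory_i() = default; };
struct cache_recurrent        : memory_i {};
struct cache_hybrid_recurrent : memory_i {};
struct cache_unified          : memory_i {};

struct model_props {
    bool is_recurrent;
    bool is_hybrid_recurrent;
};

static std::unique_ptr<memory_i> create_memory(const model_props & m) {
    // dispatch on properties, not on the architecture enum
    if (m.is_hybrid_recurrent) return std::make_unique<cache_hybrid_recurrent>();
    if (m.is_recurrent)        return std::make_unique<cache_recurrent>();
    return std::make_unique<cache_unified>(); // iswa variants elided here
}

int main() {
    auto mem = create_memory({false, true});
    return mem ? 0 : 1;
}
```
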
commit c71eaa37a0
Author: Gabe Goodhart
Date:   2025-06-17 14:54:18 -06:00

    feat: First pass at llama_kv_cache_hybrid_recurrent

    This follows the pattern in iswa where the two child caches are held
    explicitly, to support the case where a model requires a single
    attention cache and a single recurrent cache, and each layer uses
    exactly one of the caches.

    This is a rewrite of the more generic approach in the original hybrid
    cache PR: https://github.com/ggml-org/llama.cpp/pull/13276

    Branch: HybridRecurrentCache
    Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

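The composition described above, reduced to a stand-in sketch: the hybrid cache owns exactly one attention cache and one recurrent cache, plus a per-layer predicate that routes each layer to exactly one of them. The predicate here is hard-coded for illustration; in practice it would come from the model's hyperparameters.

```cpp
#include <cstdint>
#include <memory>

struct kv_cache_attn      {};
struct kv_cache_recurrent {};

// iswa-style composition: two child caches held explicitly.
struct kv_cache_hybrid_recurrent {
    std::unique_ptr<kv_cache_attn>      kv_attn  = std::make_unique<kv_cache_attn>();
    std::unique_ptr<kv_cache_recurrent> kv_recur = std::make_unique<kv_cache_recurrent>();

    // hypothetical routing predicate; real models would read this from
    // per-layer metadata such as hparams.recurrent_layer_arr
    bool layer_is_recurrent(int32_t il) const { return il % 2 == 0; }
};

int main() {
    kv_cache_hybrid_recurrent kv;
    return kv.layer_is_recurrent(0) && !kv.layer_is_recurrent(1) ? 0 : 1;
}
```
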
commit 13332a7554
Author: Gabe Goodhart
Date:   2025-06-17 14:54:18 -06:00

    fix: Use per-layer sizing everywhere in kv caches

    Branch: GraniteFour
    Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

commit 40e9187892
Author: Gabe Goodhart
Date:   2025-06-17 14:54:18 -06:00

    feat: Add layer filter to recurrent cache

    Branch: HybridCache
    Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

commit fb26e95ae7
Author: Gabe Goodhart
Date:   2025-06-17 14:54:18 -06:00

    refactor: rename *_is_hybrid -> *_is_hybrid_recurrent

    The implementation of the hybrid cache intentionally does not specify
    the types of the child caches, so there was a naming mismatch with these
    predicate functions that used "hybrid" to imply "hybrid recurrent."

    Branch: HybridCache
    Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

commit fc9e0b576e
Author: Gabe Goodhart
Date:   2025-06-17 14:54:18 -06:00

    feat: Auto-fill hparams.recurrent_layer_arr based on whether the model is recurrent

    Branch: GraniteFour
    Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

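A hedged sketch of the auto-fill described in the subject (the function and types are hypothetical, not the actual patch): purely recurrent architectures default every layer to recurrent, purely attention architectures to none, and hybrid models would overwrite the array from their own metadata.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical default-fill: one flag per layer, seeded from the
// architecture-level predicate.
static std::vector<bool> fill_recurrent_layer_arr(uint32_t n_layer, bool arch_is_recurrent) {
    return std::vector<bool>(n_layer, arch_is_recurrent);
}

int main() {
    auto mamba_like = fill_recurrent_layer_arr(4, true);  // all recurrent
    auto llama_like = fill_recurrent_layer_arr(4, false); // none recurrent
    return mamba_like[0] && !llama_like[0] ? 0 : 1;
}
```
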
commit 05f1958080
Author: Gabe Goodhart
Date:   2025-06-17 14:54:17 -06:00

    feat: Add support for distinguishing recurrent vs non-recurrent layers in hparams

    Branch: GraniteFour
    Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

commit 5e2f2c3876
Author: Gabe Goodhart
Date:   2025-06-17 14:54:17 -06:00

    feat: Add C++ side constants for attention layer indices hparam

    Branch: GraniteFour

commit ec8fe17b1a
Author: Gabe Goodhart
Date:   2025-06-17 14:54:17 -06:00

    feat: Add llama_model_is_hybrid API call

    Also, split llama_model_is_recurrent into llm_arch_is_recurrent in
    llama-arch, with llama_model_is_recurrent delegating to
    llm_arch_is_recurrent. The same split is done for hybrid. This is needed
    because there are places where the llama_model has not yet been
    initialized but we need to check if the model is recurrent (specifically
    for the per-layer recurrent check array in hparams).

    Branch: GraniteFour
    Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

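The delegation described above, as a self-contained sketch; the enum values and names are illustrative stand-ins for the real llm_arch machinery. The arch-level predicate needs no llama_model instance, so it can run before the model is fully initialized, and the model-level API stays a one-line wrapper.

```cpp
#include <cassert>

// Illustrative arch enum; ARCH_JAMBA stands in for "some hybrid arch".
enum llm_arch_sketch { ARCH_LLAMA, ARCH_MAMBA, ARCH_JAMBA };

// Arch-level predicates: usable before a model object exists.
static bool llm_arch_is_recurrent(llm_arch_sketch arch) {
    return arch == ARCH_MAMBA;
}
static bool llm_arch_is_hybrid(llm_arch_sketch arch) {
    return arch == ARCH_JAMBA;
}

struct llama_model_sketch { llm_arch_sketch arch; };

// Model-level API: thin delegation to the arch-level predicate.
static bool llama_model_is_recurrent(const llama_model_sketch * model) {
    return llm_arch_is_recurrent(model->arch);
}

int main() {
    llama_model_sketch m{ARCH_MAMBA};
    assert(llama_model_is_recurrent(&m));
    assert(!llm_arch_is_hybrid(m.arch));
    return 0;
}
```
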
commit c46503014d
Author: bandoti
Date:   2025-06-17 22:33:25 +02:00

    cmake: remove shader-gen step-targets from ggml-vulkan (#14226)

    * Remove step-targets from vulkan-shaders-gen
    * Unset DESTDIR when building vulkan-shaders-gen

commit 860a9e4eef
Author: xctan
Date:   2025-06-17 12:58:32 +03:00

    ggml-cpu : remove the weak alias trick (#14221)

commit fe9d60e74a
Author: R0CKSTAR
Date:   2025-06-17 17:48:08 +08:00

    musa: fix build warning (unused variable) (#14231)

    Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

commit e434e69183
Author: Sigbjørn Skjæret
Date:   2025-06-16 21:58:42 +02:00

    common : suggest --jinja when autodetection fails (#14222)

commit 89fea80d29
Author: Georgi Gerganov
Date:   2025-06-16 22:33:27 +03:00

    server : fix incorrect usage of llama_get_embeddings() (#14225)

    * server : fix incorrect usage of llama_get_embeddings()
      ggml-ci
    * cont : fix the fix
      ggml-ci

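For reference, a hedged sketch of the usage pattern this kind of fix is about, written against the public llama.h API (this is not the actual patch): with a pooling type other than LLAMA_POOLING_TYPE_NONE, embeddings are per sequence and should be read with llama_get_embeddings_seq(); per-token rows via llama_get_embeddings_ith() are only meaningful without pooling.

```cpp
#include "llama.h"

// Sketch only: pick the right accessor based on the context's pooling mode.
static const float * get_embd(llama_context * ctx, llama_seq_id seq, int32_t i_tok) {
    if (llama_pooling_type(ctx) == LLAMA_POOLING_TYPE_NONE) {
        // no pooling: one embedding row per token
        return llama_get_embeddings_ith(ctx, i_tok);
    }
    // pooled: one embedding row per sequence
    return llama_get_embeddings_seq(ctx, seq);
}
```
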
commit 6adc3c3ebc
Author: Diego Devesa
Date:   2025-06-16 08:11:43 -07:00

    llama : add thread safety test (#14035)

    * llama : add thread safety test
    * llamafile : remove global state
    * llama : better LLAMA_SPLIT_MODE_NONE logic:
      when main_gpu < 0, GPU devices are not used

    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

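The general shape of such a test, as a generic self-contained sketch (this is not the actual tests/test-thread-safety.cpp): run the same workload from many threads at once and fail if any thread observes a wrong result. Hidden mutable globals, like the one removed from the llamafile code here, are exactly what this style of test flushes out.

```cpp
#include <atomic>
#include <thread>
#include <vector>

// Thread-safe workload: no shared mutable state, result depends only on
// the input. A version backed by a hidden global would fail under load.
static int work(int x) {
    return x * x;
}

int main() {
    std::atomic<int> failures{0};
    std::vector<std::thread> workers;
    for (int t = 0; t < 8; ++t) {
        workers.emplace_back([&failures] {
            for (int i = 0; i < 10000; ++i) {
                if (work(7) != 49) {
                    failures++;
                }
            }
        });
    }
    for (auto & w : workers) {
        w.join();
    }
    return failures.load() == 0 ? 0 : 1;
}
```
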
commit 0dbcabde8c
Author: bandoti
Date:   2025-06-16 10:32:13 -03:00

    cmake: clean up external project logic for vulkan-shaders-gen (#14179)

    * Remove install step for vulkan-shaders-gen
    * Add install step to normalize msvc with make
    * Regenerate modified shaders at build-time

commit ad590be98c
Author: Đinh Trọng Huy
Date:   2025-06-16 14:53:41 +02:00

    model : add NeoBERT (#14164)

    * convert neobert model to gguf
    * add inference graph
    * fix flake8 lint
    * followed reviewer suggestions
    * follow reviewers suggestions
    * override NeoBERT feed-forward length

    Co-authored-by: dinhhuy <huy.dinh@brains-tech.co.jp>
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

commit 7d6d91babf
Author: uvos
Date:   2025-06-16 13:47:38 +02:00

    HIP: disable rocwmma on gfx12 by default until rocm 7.0 (#14202)

commit d3e64b9f49
Author: Georgi Gerganov
Date:   2025-06-16 14:14:00 +03:00

    llama : rework embeddings logic (#14208)

    * llama : rework embeddings logic
      ggml-ci
    * cont : fix rerank
      ggml-ci
    * cont : engrish [no ci]
    * cont : fix rerank
      ggml-ci
    * server : support both embeddings and completions with single model
      ggml-ci
    * cont : avoid embeddings_org
      ggml-ci

commit 3ba0d843c6
Author: Charles Xu
Date:   2025-06-16 11:47:57 +02:00

    ggml: Add Android support for GGML_CPU_ALL_VARIANTS (#14206)

commit 0bf49eb668
Author: Bartowski
Date:   2025-06-16 10:16:06 +02:00

    convert : remove arcee change in convert_hf_to_gguf_update.py (#14207)

commit 4ad243677b
Author: Đinh Trọng Huy
Date:   2025-06-16 09:20:59 +02:00

    gguf-py : allow key override when adding value to GGUFWriter (#14194)

    Co-authored-by: dinhhuy <huy.dinh@brains-tech.co.jp>

commit c89c2d1ab9
Author: Jeff Bolz
Date:   2025-06-16 08:21:08 +02:00

    vulkan: mutex around vkQueueSubmit (#14127)

    This fixes the remaining crash in test-thread-safety on my system.

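The idea of the fix, sketched against the real Vulkan API but with an illustrative wrapper (not the actual ggml-vulkan patch): the Vulkan spec requires external synchronization on a VkQueue, so concurrent submissions to a shared queue must be serialized behind a mutex.

```cpp
#include <mutex>
#include <vulkan/vulkan.h>

// One mutex per shared VkQueue in practice; a single global suffices here.
static std::mutex g_queue_mutex;

// All submissions to the shared queue go through this locked wrapper.
static VkResult submit_locked(VkQueue queue, uint32_t count,
                              const VkSubmitInfo * submits, VkFence fence) {
    std::lock_guard<std::mutex> lock(g_queue_mutex);
    return vkQueueSubmit(queue, count, submits, fence);
}
```
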
commit 3555b3004b
Author: xctan
Date:   2025-06-16 13:54:15 +08:00

    ggml-cpu : rework weak alias on apple targets (#14146)

    * ggml-cpu : rework weak alias on apple targets
    * fix powerpc detection
    * fix ppc detection
    * fix powerpc detection on darwin

commit d7da8dc83a
Author: Bartowski
Date:   2025-06-16 01:04:06 +02:00

    model : Add support for Arcee AI's upcoming AFM model (#14185)

    * Add Arcee AFM support
    * Add draft update code
    * Fix linter and update URL, may still not be final
    * Update src/llama-model.cpp
    * Remove accidental blank line

    Co-authored-by: Xuan-Son Nguyen <thichthat@gmail.com>

commit cd355eda7d
Author: Eric Curtin
Date:   2025-06-15 23:36:22 +02:00

    server : When listening on a unix domain socket don't print http:// and port (#14180)

    Instead show something like this:

        main: server is listening on file.sock - starting the main loop

    Signed-off-by: Eric Curtin <ecurtin@redhat.com>

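A hedged sketch of the behavior change (the names are illustrative, not the server's actual variables): a unix domain socket path has no scheme or port, so the startup banner prints just the path.

```cpp
#include <cstdio>
#include <string>

// Branch on the listener type when printing the startup banner.
static void print_listening(const std::string & host, int port, bool is_unix_socket) {
    if (is_unix_socket) {
        // no scheme, no port: just the socket path
        std::printf("main: server is listening on %s - starting the main loop\n",
                    host.c_str());
    } else {
        std::printf("main: server is listening on http://%s:%d - starting the main loop\n",
                    host.c_str(), port);
    }
}

int main() {
    print_listening("file.sock", 0, true);
    print_listening("127.0.0.1", 8080, false);
    return 0;
}
```
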
commit 30e5b01de2
Author: Ed Addario
Date:   2025-06-15 18:53:45 +02:00

    quantize : change int to unsigned int for KV overrides (#14197)

commit e54b394082
Author: uvos
Date:   2025-06-15 17:30:13 +02:00

    CUDA/HIP: fix ssm_scan on devices where warp size is not 32 (#14196)