c508256db2  rpc : Fix build on OpenBSD (#13541)
    Percy Piper, 2025-05-25 15:35:53 +03:00

40aaa8a403  mtmd : add support for Qwen2-Audio and SeaLLM-Audio (#13760)
    Xuan-Son Nguyen, 2025-05-25 14:06:32 +02:00
    * mtmd : add Qwen2-Audio support
    * small clean up
    * update discussion link
    * clarify mtmd_get_output_embd
    * clarification in multimodal.md
    * fix ultravox bug
    * ggml_cont

a08c1d2845  docs : add Moondream2 pre-quantized link (#13745)
    ddpasa, 2025-05-25 14:04:49 +02:00
    * Multimodal: Added Moondream2 model and fixed ggml.org link
    * Apply suggestions from code review
    Co-authored-by: name <none@none.com>
    Co-authored-by: Xuan-Son Nguyen <thichthat@gmail.com>

d785f9c1fd  server: fix/test add_generation_prompt (#13770)
    Olivier Chafik, 2025-05-25 10:45:49 +01:00
    Co-authored-by: ochafik <ochafik@google.com>

4032ca4066  llama : add support for Qwen3 MoE tied word embeddings (#13768)
    Piotr Jasiukajtis, 2025-05-25 10:29:43 +02:00
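With tied word embeddings, the model stores no separate output-projection (lm_head) weight: logits are computed against the transposed input embedding table. A minimal Python sketch of the idea, with illustrative sizes and names (not llama.cpp's internals):

```python
# Tied word embeddings: the output projection reuses the input embedding
# table (transposed) instead of a separate lm_head weight.
# Sizes and names below are illustrative, not llama.cpp's internals.

def logits_tied(tok_embd, hidden_state):
    """logits[v] = dot(tok_embd[v], hidden_state) for each vocab entry v."""
    return [sum(w * h for w, h in zip(row, hidden_state)) for row in tok_embd]

tok_embd = [  # 3-token vocabulary, hidden size 2
    [1.0, 0.0],
    [0.0, 1.0],
    [1.0, 1.0],
]
print(logits_tied(tok_embd, [2.0, 3.0]))  # [2.0, 3.0, 5.0]
```

The practical payoff is a smaller model file, since the embedding matrix is stored once and used in both directions.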
				
					
						
							
							
515fdbf7ed  SYCL: revert "sycl: simplify bin_bcast_kernel (#13383)" (#13752)
    Akarshan Biswas, 2025-05-25 10:08:37 +03:00
    Temporarily reverted due to a failing fp16 DIV operation.
    This reverts commit 02cdd2d8b0.

f5cd27b71d  server: streaming of tool calls and thoughts when --jinja is on (#12379)
    Olivier Chafik, 2025-05-25 01:48:08 +01:00
    * add common_json w/ support for truncated json healing
    * add common_chat_msg_diff
    * partial common_chat_parse
    * refactor parser w/ optionals
    * server: wire chat diffs in stream mode
    * fix trigger of thinking models (must happen after thoughts are closed)
    * fix functionary v3.2 raw python!
    * rename: common_chat_syntax (now contains format)
    * rm common_regex.at_start
    * don't return empty <think></think>
    * accommodate yet another deepseek r1 distill fantasy syntax (`<|tool▁calls|>`)
    * fix QwQ 32B tool call parsing after thoughts (hermes2)
    * better logs for grammar triggers
    * consume spaces after parse_json_tool_calls
    * fix required tool calls w/ thinking models that have pre-opened thinking tags
    * fix thinking model's initial trigger + test qwq's template
    * run most test_tool_call tests in stream + non-stream modes
    * make functionary v3.2 parsing more strict (differentiate first match from others)
    * send final diff from server, to close off raw python arguments
    * support partial content streaming in Generic mode
    * tool-call: allow content prelude before hermes2 tool calls (for Qwen2.5)
    * Update function-calling.md
    * Update tool_bench.py
    * chat-parser: remove input from exception (llm output may contain PII)
    Co-authored-by: ochafik <ochafik@google.com>
    Co-authored-by: Olivier Chafik <ochafik@users.noreply.github.com>
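The "truncated json healing" mentioned in the streaming commit refers to completing a partial JSON fragment so a tool-call argument can be parsed mid-stream, before the model has finished emitting it. A minimal sketch of the general idea (an illustration, not the actual common_json implementation), assuming only unclosed strings, objects, and arrays need repair:

```python
import json

def heal_truncated_json(text: str) -> str:
    """Append the closers a truncated JSON fragment is missing.
    Rough sketch: tracks open brackets and whether we are inside a
    string; does not handle trailing commas or partial literals."""
    stack, in_string, escape = [], False, False
    for ch in text:
        if in_string:
            if escape:
                escape = False
            elif ch == "\\":
                escape = True
            elif ch == '"':
                in_string = False
        elif ch == '"':
            in_string = True
        elif ch in "{[":
            stack.append("}" if ch == "{" else "]")
        elif ch in "}]":
            stack.pop()
    closers = ('"' if in_string else "") + "".join(reversed(stack))
    return text + closers

# A tool-call argument object cut off mid-stream:
partial = '{"name": "get_weather", "arguments": {"city": "Par'
print(json.loads(heal_truncated_json(partial)))
```

Healing lets the server emit incremental argument diffs to the client while the real parser stays strict about the final, complete message.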
				
					
						
							
							
a2d02d5793  releases : bundle llvm omp library in windows release (#13763)
    Diego Devesa, 2025-05-25 00:55:16 +02:00

17fc817b58  releases : enable openmp in windows cpu backend build (#13756)
    Diego Devesa, 2025-05-24 22:27:03 +02:00

2bd1b30f69  ggml-cpu : set openmp wait time if not set (#13758)
    Diego Devesa, 2025-05-24 22:26:47 +02:00

259469c4b5  Move GLM4 f32 attention fix to the correct function (#13750)
    0cc4m, 2025-05-24 16:49:12 +02:00

4c32832c59  ggml : add ggml_gelu_erf() CUDA kernel (#13719)
    Xuan-Son Nguyen, 2025-05-24 13:06:47 +02:00
    * ggml : add ggml_gelu_erf() CUDA kernel
    * missing semicolon

c3a2624339  vocab : fix ugm tokenizer precision (#13743)
    Sigbjørn Skjæret, 2025-05-24 12:29:09 +02:00

ffd0eae60b  CUDA: fix race condition in FA vector kernels (#13742)
    Johannes Gäßler, 2025-05-24 11:46:19 +02:00

b775345d78  ci : enable winget package updates (#13734)
    Diego Devesa, 2025-05-23 23:14:00 +03:00

a70a8a69c2  ci : add winget package updater (#13732)
    Diego Devesa, 2025-05-23 22:09:38 +02:00

d13d0f6135  hparams : initialize arrays (#13728)
    Georgi Gerganov, 2025-05-23 20:16:13 +03:00
    ggml-ci

8a2afb7520  llama : allow custom list of swa_layers (#13726)
    Xuan-Son Nguyen, 2025-05-23 17:07:04 +02:00

9ecf3e66a3  server : support audio input (#13714)
    Xuan-Son Nguyen, 2025-05-23 11:03:47 +02:00
    * server : support audio input
    * add audio support on webui

faaaff5f94  CANN: Support MUL_MAT_ID for q8_0 and q4_0 (#13705)
    Chenguang Li, 2025-05-23 16:47:53 +08:00
    * [CANN]Support MUL_MAT_ID Q8 && Q4
    * codestyle adjustment
    Signed-off-by: noemotiovon <757486878@qq.com>

e16c4731c7  ggml : fix the order of ggml_unary_op (#13718)
    Xuan-Son Nguyen, 2025-05-23 08:12:48 +02:00

1dcd01960c  vulkan: support CPY from any type to itself (#13695)
    Jeff Bolz, 2025-05-23 06:45:02 +02:00
    Reuse the f16/f32 copy shaders, and just scale the number of elements
    according to the type size.
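The trick in the CPY commit above is that a same-type copy is just a byte move, so an existing fixed-width copy shader can be reused by rescaling the element count to the new type's size. A small Python sketch of the arithmetic, with an illustrative block size (this is the counting logic only, not the actual Vulkan dispatch code):

```python
def scaled_copy_elements(n_elements: int, type_size: int, shader_elem_size: int) -> int:
    """How many shader elements to dispatch to copy n_elements values of
    type_size bytes each, using a copy shader that moves shader_elem_size
    bytes per element. Assumes the total byte count divides evenly."""
    total_bytes = n_elements * type_size
    assert total_bytes % shader_elem_size == 0, "copy is not an even multiple"
    return total_bytes // shader_elem_size

# e.g. copying 64 q8_0-style blocks of 34 bytes each (32 int8 weights plus
# an f16 scale) with an f16 copy shader that moves 2 bytes per element:
print(scaled_copy_elements(64, 34, 2))  # 1088
```

Because the source and destination types are identical, no conversion shader per quantization format is needed; only the element count changes.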
				
					
						
							
							
c10ed6cbcc  vulkan: Disable coopmat/coopmat2/bfloat extensions if glslc doesn't support it (#13696)
    Jeff Bolz, 2025-05-23 06:33:45 +02:00

a127ff1780  use LOG_WARN to replace std::cerr (#13657)
    Judd, 2025-05-23 06:33:08 +02:00

3079e9ac8e  release : fix windows hip release (#13707)
    Diego Devesa, 2025-05-23 00:21:37 +02:00
    * release : fix windows hip release
    * make single hip release with multiple targets

8a1d206f1d  tts : fix n_ubatch + make WavTokenizer cache-less (#13713)
    Georgi Gerganov, 2025-05-22 22:21:07 +03:00
    ggml-ci

797990c4bc  mtmd : add ultravox audio input (#13623)
    Xuan-Son Nguyen, 2025-05-22 20:42:48 +02:00
    * convert ok, load ok
    * warmup ok
    * fix padding
    * fix merge conflict
    * build_ultravox()
    * add necessary mtmd APIs
    * first working version (only 4s of audio)
    * fix compile
    * fPIC
    * fix windows
    * various fixes
    * clean up audio_helpers
    * fix conversion
    * long audio input ok
    * adapt the api
    * add --audio arg
    * final touch UX
    * add miniaudio to readme
    * fix typo
    * refactor kv metadata
    * mtmd_default_marker()

ab86335760  common: Include torch package for s390x (#13699)
    Aaron Teo, 2025-05-22 21:31:29 +03:00
    * common: update requirements.txt to include pytorch nightly for s390x
    * common: fix torch installation via pip for s390x
    Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

cc74d5be99  server : pad small embedding batches (#13692)
    Georgi Gerganov, 2025-05-22 16:33:39 +03:00
    ggml-ci

5be24af73d  gguf-py : correct charsmap parameter typing (#13701)
    Sigbjørn Skjæret, 2025-05-22 14:25:05 +02:00

d394a9aedc  sycl : Remove waits from function calls (#13702)
    Nicolò Scipione, 2025-05-22 12:54:43 +01:00
    * removes the waits in async memcpy functions

6b56a64690  SYCL: Avoid using with SYCL-Graph for unsupported nodes (#13587)
    Ewan Crawford, 2025-05-22 16:24:09 +08:00
    Currently, on a CUDA backend to SYCL, running
    `GGML_SYCL_DISABLE_GRAPH=0 ./bin/test-backend-ops -b SYCL0` hits two
    operations that throw an exception from blocking waits during queue
    recording:
    * `-o CONCAT`: blocking waits on a queue that is being recorded
      (https://github.com/ggml-org/llama.cpp/blob/master/ggml/src/ggml-sycl/concat.cpp#L185-L187)
    * `-o MUL_MAT_ID`: blocking wait on a recording queue for a copy to
      host memory
      (https://github.com/ggml-org/llama.cpp/blob/master/ggml/src/ggml-sycl/ggml-sycl.cpp#L3072-L3074)
    We've noticed that `ggml-cuda.cu` has the
    `check_node_graph_compatibility_and_refresh_copy_ops` function
    (ggml/src/ggml-cuda/ggml-cuda.cu L2458, at commit 39e73ae0d6).

a4e8912dfd  opencl: Add support for multiple devices (#12622)
    Henry Linjamäki, 2025-05-21 16:21:45 -07:00
    * opencl: Add support for multiple devices, but limited to one
      platform; a platform with a GPU will be preferred. Additionally:
      filter out devices that lack capabilities needed by the backend
      implementation (half support, OpenCL 2.0+, etc.), and make
      ggml_backend_opencl_reg() thread-safe.
    * fixup: fix an error in sync_with_other_backends when there is only
      one OpenCL device available.

edbf42edfd  opencl: fix couple crashes (#12795)
    Henry Linjamäki, 2025-05-21 13:21:17 -07:00
    * fix kernel launches failing on devices that do not support
      non-uniform work-groups: when non-uniform work-groups are not
      supported, set `local_work_size` to NULL (i.e. let the driver choose
      the work-group sizes). This patch does not cover everything, just
      the cases tested by test-backend-ops.
    * fix sub-buffer creation failing due to `cl_buffer_region::origin`
      not being aligned to `CL_DEVICE_MEM_BASE_ADDR_ALIGN`.
    * query non-uniform WG sizes only on OpenCL 3.0+

d643bb2c79  releases : build CPU backend separately (windows) (#13642)
    Diego Devesa, 2025-05-21 22:09:57 +02:00

8e186ef0e7  hparams : support models for which all layers use SWA (#13682)
    Georgi Gerganov, 2025-05-21 20:00:49 +03:00
    ggml-ci

5fbfe384d4  server : improve error reporting (#13680)
    Georgi Gerganov, 2025-05-21 19:46:56 +03:00

c76532e7ba  convert : add qwen2vl support for unsloth merges (#13686)
    antichristHater, 2025-05-21 18:40:35 +02:00

2aa777d86d  examples : switch retrieval to llama_encode (#13685)
    Sigbjørn Skjæret, 2025-05-21 16:57:38 +02:00
    * switch retrieval to llama_encode
    * enable --no-warmup for retrieval

eb0f5c28d3  gguf-py : display the invalid gguf type (#13687)
    Emmanuel Ferdman, 2025-05-21 16:33:54 +02:00
    Signed-off-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>

cf4cb59e64  ggml : add ggml_gelu_erf() (#13667)
    Xuan-Son Nguyen, 2025-05-21 16:26:33 +02:00
    * ggml : add ggml_gelu_na (not approximated)
    * fix naming order
    * rename na --> erf
    * apply review suggestions
    * revert naming order
				
					
						
							
							
0d5c742161  server : Add the endpoints /api/tags and /api/chat (#13659)
    Robin Davidsson, 2025-05-21 15:15:27 +02:00
    * Add the endpoints /api/tags and /api/chat, and improve the model
      metadata response
    * Remove trailing whitespace
    * Remove code that is not needed for copilot to work

42158ae2e8  server : fix first message identification (#13634)
    Dorin-Andrei Geman, 2025-05-21 15:07:57 +02:00
    * server : fix first message identification
      When using the OpenAI SDK
      (https://github.com/openai/openai-node/blob/master/src/lib/ChatCompletionStream.ts#L623-L626)
      we noticed that the expected assistant role is missing in the first
      streaming message. Fix this by correctly checking for the first message.
    * server : fix checks for the first role message for stream=True
    Signed-off-by: Dorin Geman <dorin.geman@docker.com>
    Co-authored-by: Piotr Stankiewicz <piotr.stankiewicz@docker.com>

797f2ac062  kv-cache : simplify the interface (#13660)
    Georgi Gerganov, 2025-05-21 15:11:13 +03:00
    * kv-cache : simplify the interface
    * context : revert llama_batch_allocr position change
    ggml-ci

b44890df2e  model : disable SWA for Phi models (#13676)
    Georgi Gerganov, 2025-05-21 13:09:21 +03:00
    * model : disable SWA for Phi models
    * model : update warning message
    * model : print warning only if n_swa > 0
    * model : fix typo
    ggml-ci

33983057d0  musa: Upgrade MUSA SDK version to rc4.0.1 and use mudnn::Unary::IDENTITY op to accelerate D2D memory copy (#13647)
    R0CKSTAR, 2025-05-21 09:58:49 +08:00
    * musa: fix build warning (unused parameter)
    * musa: upgrade MUSA SDK version to rc4.0.1
    * musa: use mudnn::Unary::IDENTITY op to accelerate D2D memory copy
    * Update ggml/src/ggml-cuda/cpy.cu
    * musa: remove MUDNN_CHECK_GEN and use CUDA_CHECK_GEN instead in MUDNN_CHECK
    Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
    Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

fb1cab201c  vulkan: fix warnings (#13626)
    Eve, 2025-05-20 21:35:16 +00:00
    * small fixes
    * remove ifdef

b7a17463ec  mtmd-helper : bug fix to token batching in mtmd (#13650)
    l3utterfly, 2025-05-20 18:55:30 +02:00
    * Update mtmd-helper.cpp
    * Update tools/mtmd/mtmd-helper.cpp
    Co-authored-by: Xuan-Son Nguyen <thichthat@gmail.com>

be0239693c  model : fix llama4 graph (#13663)
    Georgi Gerganov, 2025-05-20 19:21:04 +03:00
    ggml-ci

a4090d1174  llama : remove llama_kv_cache_view API + remove deprecated (#13653)
    Georgi Gerganov, 2025-05-20 16:13:16 +03:00
    ggml-ci