mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-10-30 08:42:00 +00:00)
	llama : fix Gemma3 SWA KV cache shift (#12373)
* llama : fix Gemma3 SWA KV cache shift (ggml-ci)
* hparams : add comment [no ci]
@@ -168,6 +168,8 @@ private:
         ggml_tensor * cur,
         ggml_tensor * shift,
         ggml_tensor * factors,
+              float   freq_base,
+              float   freq_scale,
         ggml_backend_buffer * bbuf) const;

     llm_graph_result_ptr build_kv_self_shift(
Georgi Gerganov