llama : use n_swa + n_ubatch cells for SWA cache (#13833)

* llama : use n_swa + n_ubatch cells for SWA cache

ggml-ci

* llama : add warning about multi-sequence SWA contexts
This commit is contained in:
Georgi Gerganov
2025-05-31 15:57:44 +03:00
committed by GitHub
parent c7e0a2054b
commit 3600cc2886
6 changed files with 24 additions and 11 deletions


@@ -339,7 +339,7 @@ public:
                 bool swa_full,
             uint32_t kv_size,
             uint32_t n_seq_max,
-            uint32_t n_batch,
+            uint32_t n_ubatch,
             uint32_t n_pad);
     ~llama_kv_cache_unified_iswa() = default;