kv-cache : drop the "unified" prefix (#15467)
* kv-cache : drop the "unified" prefix

ggml-ci

* cont : fix comment

[no ci]
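A note on scope: the only identifier visibly renamed in the hunk below is llama_kv_cache_unified_context_i -> llama_kv_cache_context_i, and it appears in a comment. The sketch below only illustrates the rename pattern the commit title describes, assuming the same prefix drop applies to the related kv-cache types elsewhere in the tree; the names marked as assumed are not confirmed by this page.

```cpp
// Illustration of the rename pattern only, not code taken from the commit.
//
// Confirmed by the hunk below (comment text):
//   llama_kv_cache_unified_context_i  ->  llama_kv_cache_context_i
// Assumed to follow the same pattern (not shown on this page):
//   llama_kv_cache_unified            ->  llama_kv_cache

class llama_kv_cache;           // assumed post-rename name
class llama_kv_cache_context_i; // post-rename name per the hunk below
```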
@@ -12,7 +12,7 @@
 //
 
 // TODO: extract the cache state used for graph computation into llama_memory_recurrent_context_i
-// see the implementation of llama_kv_cache_unified_context_i for an example how to do it
+// see the implementation of llama_kv_cache_context_i for an example how to do it
 class llama_memory_recurrent : public llama_memory_i {
 public:
 
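The TODO in the hunk describes a refactor that has not happened yet: move the state that graph construction reads out of llama_memory_recurrent and behind a small context interface, the way the KV cache already does with llama_kv_cache_context_i. Below is a minimal sketch of that shape; apart from llama_memory_recurrent and llama_kv_cache_context_i, every name and member is a hypothetical placeholder, not the actual llama.cpp API.

```cpp
#include <cstdint>
#include <memory>
#include <vector>

// Hypothetical interface: the read-only view of recurrent-cache state that
// graph-building code would consume, analogous in spirit to
// llama_kv_cache_context_i. Members are illustrative only.
struct llama_memory_recurrent_context_i {
    virtual ~llama_memory_recurrent_context_i() = default;

    // cache cells the current ubatch is mapped to (illustrative)
    virtual const std::vector<int32_t> & get_cells() const = 0;

    // number of recurrent states visible to this graph build (illustrative)
    virtual uint32_t get_n_rs() const = 0;
};

// Hypothetical owner: the cache hands out one context per batch instead of
// exposing its mutable bookkeeping to graph code directly.
class llama_memory_recurrent_sketch {
public:
    // prepare the context for one ubatch; only the context crosses into
    // the graph-building code
    std::unique_ptr<llama_memory_recurrent_context_i> init_batch_context();

private:
    std::vector<int32_t> cells; // mutable cache state stays private
};
```

The point of the split is the one the rename keeps intact: graph construction depends on a narrow, stable interface, while the cache retains ownership of its internal bookkeeping.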