Files in llama.cpp/ggml/src
Max Krasnyansky dcca0d3ab8 cpu: introduce chunking for flash attention (#16829)
Factor out the core FA loop into flash_atten_f16_one_chunk and add an outer loop
on top that handles the chunks.
2025-10-30 14:26:05 +02:00
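A minimal sketch of the chunking pattern the commit message describes, not the actual ggml code: the per-row flash-attention work is factored into a one-chunk helper, and an outer loop hands out chunks to worker threads. All names here (fa_params, fa_one_chunk, fa_compute, chunk_size) are hypothetical, and the use of a shared atomic counter to distribute chunks is an assumption about one common way to balance the load.

/* Hypothetical sketch of chunked flash attention; C11, not the ggml source. */
#include <stdatomic.h>

typedef struct {
    int        nrows;      /* total query rows to process            */
    int        chunk_size; /* rows per chunk                         */
    atomic_int counter;    /* next chunk index, shared by all threads */
    /* tensor pointers, scales, masks, etc. would live here */
} fa_params;

/* Core flash-attention loop over one contiguous range of query rows. */
static void fa_one_chunk(fa_params * p, int row_start, int row_end) {
    for (int r = row_start; r < row_end; r++) {
        /* online-softmax accumulation over the KV cache for row r (omitted) */
        (void) p;
    }
}

/* Outer loop: each thread repeatedly claims the next unprocessed chunk
 * until none remain, which balances load better than a fixed static split. */
static void fa_compute(fa_params * p) {
    const int nchunks = (p->nrows + p->chunk_size - 1) / p->chunk_size;
    for (;;) {
        const int chunk = atomic_fetch_add(&p->counter, 1);
        if (chunk >= nchunks) {
            break;
        }
        const int start = chunk * p->chunk_size;
        const int end   = (start + p->chunk_size < p->nrows) ? start + p->chunk_size : p->nrows;
        fa_one_chunk(p, start, end);
    }
}

Each thread would call fa_compute on the same fa_params (with counter initialized to 0); the atomic fetch-and-add guarantees every chunk is processed exactly once regardless of how fast individual threads run.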