Factor out the core FA loop into flash_atten_f16_one_chunk and add an outer loop on top that handles the chunks.
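The refactor described above has a common shape: isolate a per-chunk flash-attention kernel that processes a fixed range of query rows, then drive it from an outer loop that distributes chunks across worker threads. Below is a minimal illustrative sketch of that shape, not the ggml code itself: the names (`flash_attn_one_chunk`, `flash_attn_chunked`), the plain two-pass softmax, and the `std::thread` pool with an atomic chunk counter are all assumptions standing in for ggml's f16 kernel and its own threadpool/scheduling.

```cpp
// Sketch of the "one chunk" refactor pattern. Illustrative only;
// all names and signatures are hypothetical, not the ggml API.
#include <algorithm>
#include <atomic>
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <thread>
#include <vector>

// Per-chunk kernel: attention for query rows [ir0, ir1).
// q: [n_q x d], k/v: [n_kv x d], out: [n_q x d], row-major float.
static void flash_attn_one_chunk(const float * q, const float * k, const float * v,
                                 float * out, int64_t d, int64_t n_kv,
                                 int64_t ir0, int64_t ir1) {
    const float scale = 1.0f / std::sqrt((float) d);
    std::vector<float> s(n_kv);
    for (int64_t ir = ir0; ir < ir1; ++ir) {
        const float * qr = q + ir*d;
        // scores = q . k^T * scale, tracking the max for a stable softmax
        float smax = -INFINITY;
        for (int64_t j = 0; j < n_kv; ++j) {
            float dot = 0.0f;
            for (int64_t c = 0; c < d; ++c) dot += qr[c]*k[j*d + c];
            s[j] = dot*scale;
            smax = std::max(smax, s[j]);
        }
        float sum = 0.0f;
        for (int64_t j = 0; j < n_kv; ++j) { s[j] = std::exp(s[j] - smax); sum += s[j]; }
        // out_row = softmax(scores) @ V
        float * orow = out + ir*d;
        for (int64_t c = 0; c < d; ++c) orow[c] = 0.0f;
        for (int64_t j = 0; j < n_kv; ++j) {
            const float w = s[j]/sum;
            for (int64_t c = 0; c < d; ++c) orow[c] += w*v[j*d + c];
        }
    }
}

// Outer loop: threads pull chunk indices from a shared atomic counter,
// so faster threads simply grab more chunks (dynamic load balancing).
static void flash_attn_chunked(const float * q, const float * k, const float * v,
                               float * out, int64_t d, int64_t n_kv, int64_t n_q,
                               int n_threads, int64_t chunk_rows) {
    const int64_t n_chunks = (n_q + chunk_rows - 1)/chunk_rows;
    std::atomic<int64_t> next{0};
    auto worker = [&]() {
        for (int64_t c = next.fetch_add(1); c < n_chunks; c = next.fetch_add(1)) {
            const int64_t ir0 = c*chunk_rows;
            const int64_t ir1 = std::min(ir0 + chunk_rows, n_q);
            flash_attn_one_chunk(q, k, v, out, d, n_kv, ir0, ir1);
        }
    };
    std::vector<std::thread> pool;
    for (int t = 0; t < n_threads; ++t) pool.emplace_back(worker);
    for (auto & th : pool) th.join();
}

int main() {
    const int64_t d = 8, n_kv = 32, n_q = 100;
    std::vector<float> q(n_q*d, 0.1f), k(n_kv*d, 0.2f), v(n_kv*d, 0.3f), out(n_q*d);
    flash_attn_chunked(q.data(), k.data(), v.data(), out.data(), d, n_kv, n_q,
                       /*n_threads=*/4, /*chunk_rows=*/16);
    std::printf("out[0] = %f\n", out[0]);
}
```

One note on the design: pulling chunk indices from a shared counter, rather than statically splitting rows across threads, lets faster threads pick up extra chunks, which is the usual motivation for adding a chunk loop on top of a fixed-range kernel.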