Files
llama.cpp/ggml/src/ggml-cuda
Aman Gupta a90eb94ca9 CUDA: fuse rope + set_rows (#16884)
* CUDA: add fused rope

* move k forward_expand up

* create helper function instead of re-using params

* make assert statement more in line with comment

* rope_norm: coalesced writes to global mem
2025-11-13 08:50:01 +08:00
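The last bullet mentions coalesced writes to global memory. As a minimal illustrative sketch (not the actual `rope_norm` kernel, and using a simplified hardware model): a GPU combines a warp's 32 memory accesses into as few 128-byte segment transactions as possible, so consecutive threads writing consecutive addresses need one transaction, while strided writes can need one per thread. The `transactions_per_warp` helper below is hypothetical, introduced only to count the distinct segments a warp touches:

```python
def transactions_per_warp(addresses, segment=128):
    # Simplified model: the hardware coalesces a warp's accesses into
    # 128-byte segments, so the transaction count is the number of
    # distinct segments the warp's addresses fall into.
    return len({a // segment for a in addresses})

WARP = 32  # threads per warp
ELEM = 4   # bytes per float

# Coalesced: thread i writes float element i -> one contiguous 128-byte span.
coalesced = [i * ELEM for i in range(WARP)]

# Strided: thread i writes element 32*i -> each write lands in its own segment.
strided = [i * 32 * ELEM for i in range(WARP)]

print(transactions_per_warp(coalesced))  # 1
print(transactions_per_warp(strided))    # 32
```

Under this model, restructuring a kernel so that adjacent threads write adjacent elements (as the `rope_norm` change describes) cuts the per-warp transaction count, which is why such rewrites help memory-bound kernels.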