llama.cpp/ggml-cuda.h
slaren e04dc51988 ggml-cuda : add rope f16, restore performance with parallel decoding (#3272)
* ggml-cuda : add rope f16, restore performance

* offload KQ_mask with all models

* fix rope shift

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-09-20 14:00:28 +03:00
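The commit adds an f16 code path for RoPE (rotary position embedding) in the CUDA backend. As a rough illustration of the rotation RoPE applies, here is a minimal NumPy sketch, not the actual ggml-cuda kernel: consecutive activation pairs are rotated by a position-dependent angle, with trigonometry done in f32 and activations stored in f16 (the frequency base `10000.0` is the common default, assumed here):

```python
import numpy as np

def rope(x, pos, base=10000.0):
    # x: (seq, dim) activations; rotate each consecutive pair
    # (x[2i], x[2i+1]) by theta_i = pos * base**(-2i/dim).
    d = x.shape[-1]
    inv_freq = base ** (-np.arange(0, d, 2) / d)   # (d/2,)
    theta = np.outer(pos, inv_freq)                # (seq, d/2)
    cos, sin = np.cos(theta), np.sin(theta)
    x0, x1 = x[..., 0::2], x[..., 1::2]
    out = np.empty_like(x)
    out[..., 0::2] = x0 * cos - x1 * sin
    out[..., 1::2] = x0 * sin + x1 * cos
    return out

# f16 round trip: keep activations in f16, rotate in f32
x16 = np.random.randn(4, 8).astype(np.float16)
pos = np.arange(4)
y16 = rope(x16.astype(np.float32), pos).astype(np.float16)
```

At position 0 every angle is zero, so the rotation is the identity; at other positions it preserves the norm of each pair, which is why RoPE can be applied (and shifted, as in the "fix rope shift" item) without rescaling the activations.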