llama.cpp/ggml-backend.h
Commit 652c849643 by Georgi Gerganov, 2023-07-18 18:51:02 +03:00: "ggml : add is_ram_shared to ggml_backend"
The Metal backend shares RAM with the host, so it can use mmap'd model data directly, without staging through a temporary buffer.

File size: 7.9 KiB
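
The commit only states that a `is_ram_shared` flag is added to `ggml_backend` and why (a RAM-sharing backend like Metal can consume mmap'd data without a temp buffer). The C sketch below is a hypothetical illustration of how such a flag might be consulted during weight loading; it is not the actual ggml-backend.h interface at this commit, and every name other than ggml_backend / is_ram_shared is invented for illustration.

// Hypothetical sketch, not the real ggml-backend.h API: assume the backend
// struct carries a bool is_ram_shared that the loader checks for mmap'd weights.
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

struct ggml_backend {
    /* interface and context members omitted in this sketch */
    bool is_ram_shared; // true when the backend addresses host RAM (e.g. Metal on Apple silicon)
};

// Point the tensor at the mmap'd bytes when RAM is shared; otherwise stage a
// copy into a device buffer.
void load_mmapped_tensor(const struct ggml_backend * backend,
                         void * mmap_addr, size_t size,
                         void * device_buffer, void ** tensor_data) {
    if (backend->is_ram_shared) {
        *tensor_data = mmap_addr;                // use the mapping directly, no temp buffer
    } else {
        memcpy(device_buffer, mmap_addr, size);  // staging copy for non-unified backends
        *tensor_data = device_buffer;
    }
}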