llama.cpp/convert-hf-to-gguf.py
Latest commit: 36eed0c42c by Galunid, 2023-11-14 11:17:12 +01:00
stablelm : StableLM support (#3586)
* Add support for stablelm-3b-4e1t
* Supports GPU offloading of (n-1) layers

38 KiB, executable file