diff --git a/backend/README.md b/backend/README.md
index c1fe7af..3e13298 100644
--- a/backend/README.md
+++ b/backend/README.md
@@ -5,7 +5,7 @@
 Below will setup the backend including the `go` orchestration layer and a `llama.cpp` inference server on `localhost:8081` and `localhost:8080` for local testing.
 
 ### Building `llama.cpp`
-In `$REPO/third_party/llama.cpp` run `make` to build.
+See documentation for `llama.cpp` for details.
 
 ### Running `llama.cpp`
 #### Getting a `GGUF` format model