modified README for backend
The steps below set up the backend, including the `go` orchestration layer and a `llama.cpp` inference server on `localhost:8081` and `localhost:8080`, for local testing.
### Building `llama.cpp`
In `$REPO/third_party/llama.cpp`, run `make` to build.
See the `llama.cpp` documentation for details.
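The build step above amounts to the following; `$REPO` is assumed to be set to the repository root, as used elsewhere in this README.

```shell
# Build the vendored llama.cpp; assumes $REPO points at the repository root.
cd "$REPO/third_party/llama.cpp"
make
```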
### Running `llama.cpp`
#### Getting a `GGUF` format model
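One common way to obtain a `GGUF` model is to download a pre-converted file from Hugging Face. The repository and file names below are purely illustrative, not a project recommendation.

```shell
# Download a pre-converted GGUF model (illustrative names, not project defaults).
# Requires the Hugging Face Hub CLI: pip install -U "huggingface_hub[cli]"
huggingface-cli download TheBloke/Llama-2-7B-GGUF llama-2-7b.Q4_K_M.gguf \
  --local-dir "$REPO/models"
```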
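With a model in hand, starting the inference server might look like the following sketch. The binary name varies by `llama.cpp` version (`llama-server` in recent builds, `server` in older ones), the model path is illustrative, and `8081` is assumed here to be the inference port mentioned in the introduction; adjust all three for your layout.

```shell
# Start the llama.cpp HTTP server for local testing.
# Binary name, model path, and port are assumptions; adjust for your build.
cd "$REPO/third_party/llama.cpp"
./llama-server -m "$REPO/models/llama-2-7b.Q4_K_M.gguf" \
  --host 127.0.0.1 --port 8081
```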