Update examples/server/README.md
Co-authored-by: slaren <slarengh@gmail.com>
@@ -54,7 +54,8 @@ To get started right away, run the following command, making sure to use the cor
### Windows:

```powershell
server.exe -m models\7B\ggml-model.gguf -c 2048
```

The above command will start a server that by default listens on `127.0.0.1:8080`.
You can consume the endpoints with Postman or NodeJS with the axios library. You can visit the web front end at the same URL.
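As a minimal sketch of consuming the endpoints from NodeJS, the snippet below POSTs a prompt to the server's `/completion` endpoint with axios. It is not part of this diff: the endpoint path, the request fields (`prompt`, `n_predict`), and the `content` field of the response are assumed from the server's documented API, and the server is assumed to be running on the default `127.0.0.1:8080` as started above.

```typescript
// Sketch: query the llama.cpp server completion endpoint with axios.
// Assumes the server is listening on the default 127.0.0.1:8080 and that
// /completion accepts {"prompt", "n_predict"} and returns {"content", ...}.
import axios from "axios";

async function complete(prompt: string): Promise<string> {
  const response = await axios.post("http://127.0.0.1:8080/completion", {
    prompt,        // text the model should continue
    n_predict: 64, // maximum number of tokens to generate
  });
  // "content" is assumed to hold the generated text in the JSON reply.
  return response.data.content;
}

complete("Building a website can be done in 10 simple steps:")
  .then((text) => console.log(text))
  .catch((err) => console.error("request failed:", err.message));
```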