Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-10-30 08:42:00 +00:00.
	readme : server compile flag (#1874)
Explicitly include the server make instructions for C++ noobs like me ;)
````diff
@@ -16,6 +16,10 @@ This example allow you to have a llama.cpp http server to interact from a web pa
 To get started right away, run the following command, making sure to use the correct path for the model you have:
 
 #### Unix-based systems (Linux, macOS, etc.):
+Make sure to build with the server option on
+```bash
+LLAMA_BUILD_SERVER=1 make
+```
 
 ```bash
 ./server -m models/7B/ggml-model.bin --ctx_size 2048
````
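Assembled from the hunk's context and added lines, the patched README section reads as follows (a reconstruction from the diff above, not a separate source):

````markdown
#### Unix-based systems (Linux, macOS, etc.):
Make sure to build with the server option on
```bash
LLAMA_BUILD_SERVER=1 make
```

```bash
./server -m models/7B/ggml-model.bin --ctx_size 2048
```
````

The point of the change: the `server` example is not built by a plain `make`, so the build step with `LLAMA_BUILD_SERVER=1` must run before the `./server` command shown further down can work.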
Author: Srinivas Billa