Mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-10-31 08:51:55 +00:00)
	typo : it is --n-gpu-layers not --gpu-layers (#3592)
			fixed a typo in the MacOS Metal run doco
@@ -279,7 +279,7 @@ In order to build llama.cpp you have three different options.
 On MacOS, Metal is enabled by default. Using Metal makes the computation run on the GPU.
 To disable the Metal build at compile time use the `LLAMA_NO_METAL=1` flag or the `LLAMA_METAL=OFF` cmake option.
 
-When built with Metal support, you can explicitly disable GPU inference with the `--gpu-layers|-ngl 0` command-line
+When built with Metal support, you can explicitly disable GPU inference with the `--n-gpu-layers|-ngl 0` command-line
 argument.
 
 ### MPI Build
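For context, the options named in the diff can be exercised roughly as follows. This is a usage sketch, not output from the repository: the model path is illustrative, and the binary name `./main` assumes a default `make` build of llama.cpp from this era.

```shell
# Disable the Metal backend at compile time (either of the two
# options the README names, depending on the build system used):
LLAMA_NO_METAL=1 make
# or: cmake -B build -DLLAMA_METAL=OFF && cmake --build build

# Keep the Metal build, but disable GPU inference at run time by
# offloading zero layers, using the corrected flag (or its short
# form -ngl). The model path here is a placeholder.
./main -m models/7B/ggml-model-q4_0.gguf -p "Hello" --n-gpu-layers 0
```

Note that `--gpu-layers` (the misspelled form this commit fixes in the docs) was never the long option; `--n-gpu-layers` and `-ngl` are the forms the CLI actually accepts.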
Ian Scrivener