	server : remove obsolete --memory-f32 option
@@ -30,7 +30,6 @@ The project is under active development, and we are [looking for feedback and co
 - `-ts SPLIT, --tensor-split SPLIT`: When using multiple GPUs, this option controls how large tensors should be split across all GPUs. `SPLIT` is a comma-separated list of non-negative values that assigns the proportion of data that each GPU should get in order. For example, "3,2" will assign 60% of the data to GPU 0 and 40% to GPU 1. By default, the data is split in proportion to VRAM, but this may not be optimal for performance.
 - `-b N`, `--batch-size N`: Set the batch size for prompt processing. Default: `2048`
 - `-ub N`, `--ubatch-size N`: Physical maximum batch size. Default: `512`
-- `--memory-f32`: Use 32-bit floats instead of 16-bit floats for memory key+value. Not recommended.
 - `--mlock`: Lock the model in memory, preventing it from being swapped out when memory-mapped.
 - `--no-mmap`: Do not memory-map the model. By default, models are mapped into memory, which allows the system to load only the necessary parts of the model as needed.
 - `--numa STRATEGY`: Attempt one of the below optimization strategies that may help on some NUMA systems
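A note on the `-ts`/`--tensor-split` semantics retained in this list: each value is divided by the sum of all values to get the fraction of tensor data assigned to the corresponding GPU, which is how "3,2" yields 60%/40%. A minimal standalone sketch of that arithmetic (a hypothetical illustration, not the actual llama.cpp parsing code):

```cpp
// Hypothetical sketch: how a --tensor-split value such as "3,2" maps to
// per-GPU fractions (3/5 = 60%, 2/5 = 40%). Not the llama.cpp implementation.
#include <cstdio>
#include <sstream>
#include <string>
#include <vector>

int main() {
    std::string split = "3,2";  // value passed to --tensor-split (assumed example)
    std::vector<float> parts;
    std::stringstream ss(split);
    for (std::string item; std::getline(ss, item, ',');) {
        parts.push_back(std::stof(item));  // parse each comma-separated proportion
    }
    float sum = 0.0f;
    for (float p : parts) sum += p;
    for (size_t i = 0; i < parts.size(); ++i) {
        // normalize by the total to get each GPU's share
        std::printf("GPU %zu gets %.0f%% of the data\n", i, 100.0f * parts[i] / sum);
    }
    return 0;
}
```

Compiled and run, this prints `GPU 0 gets 60% of the data` and `GPU 1 gets 40% of the data`.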
@@ -2189,8 +2189,6 @@ static void server_print_usage(const char * argv0, const gpt_params & params, co
     printf("                            KV cache defragmentation threshold (default: %.1f, < 0 - disabled)\n", params.defrag_thold);
     printf("  -b N, --batch-size N      logical maximum batch size (default: %d)\n", params.n_batch);
     printf("  -ub N, --ubatch-size N    physical maximum batch size (default: %d)\n", params.n_ubatch);
-    printf("  --memory-f32              use f32 instead of f16 for memory key+value (default: disabled)\n");
-    printf("                            not recommended: doubles context memory required and no measurable increase in quality\n");
     if (llama_supports_mlock()) {
         printf("  --mlock                   force system to keep model in RAM rather than swapping or compressing\n");
     }
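For context on the removal: the KV cache stores one key and one value vector per layer per context position, so switching the element type from f16 (2 bytes) to f32 (4 bytes) doubles the cache size, which matches the "doubles context memory required" warning in the removed help text. A rough back-of-the-envelope sketch (simplified; the model dimensions are assumed example values, and it ignores grouped-query attention and quantized cache types):

```cpp
// Rough KV-cache size estimate: 2 (K and V) * layers * context length
// * embedding dim * bytes per element. Simplified illustration only;
// the dimensions below are assumed values for a 7B-class dense model.
#include <cstdio>
#include <initializer_list>

int main() {
    const long long n_layer = 32;    // number of transformer layers (example)
    const long long n_ctx   = 4096;  // context length (example)
    const long long n_embd  = 4096;  // embedding dimension (example)
    for (long long bytes : {2LL, 4LL}) {  // 2 bytes = f16, 4 bytes = f32
        long long total = 2 * n_layer * n_ctx * n_embd * bytes;
        std::printf("%s KV cache: %.1f GiB\n", bytes == 2 ? "f16" : "f32",
                    total / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}
```

For these example numbers the f16 cache comes to about 2.0 GiB and the f32 cache to about 4.0 GiB, for no measurable quality gain per the removed help text.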