	finetune: fix typo in README.md (#4733)
Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>
@@ -61,7 +61,7 @@ For example to apply 40% of the 'shakespeare' LORA adapter, 80% of the 'bible' L
   --lora lora-open-llama-3b-v2-q8_0-yet-another-one-LATEST.bin
 ```
 
-The scale numbers don't need to add up to one, and you can also use numbers greater than 1 to further increase the influence of an adapter. But making the values to big will sometimes result in worse output. Play around to find good values.
+The scale numbers don't need to add up to one, and you can also use numbers greater than 1 to further increase the influence of an adapter. But making the values too big will sometimes result in worse output. Play around to find good values.
 
 Gradient checkpointing reduces the memory requirements by ~50% but increases the runtime.
 If you have enough RAM, you can make finetuning a bit faster by disabling checkpointing with `--no-checkpointing`.
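The hunk header refers to a command that applies the 'shakespeare' adapter at 40% and the 'bible' adapter at 80%; only its last line is visible as context above. A minimal sketch of what such an invocation could look like, assuming `main`'s `--lora FNAME` and `--lora-scaled FNAME SCALE` options; the model path and the shakespeare/bible adapter filenames are illustrative, inferred from the naming pattern in the visible context line rather than taken from the README:

```bash
# Illustrative sketch only: the real command sits above the visible hunk.
# Assumed: main accepts --lora-scaled FNAME SCALE and --lora FNAME; the
# shakespeare/bible adapter names mirror the pattern shown in the context line.
./main -m open-llama-3b-v2-q8_0.gguf \
  --lora-scaled lora-open-llama-3b-v2-q8_0-shakespeare-LATEST.bin 0.4 \
  --lora-scaled lora-open-llama-3b-v2-q8_0-bible-LATEST.bin 0.8 \
  --lora lora-open-llama-3b-v2-q8_0-yet-another-one-LATEST.bin \
  -p "Shall I compare thee"
```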
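For the gradient-checkpointing trade-off in the last two context lines, a hedged sketch of disabling checkpointing when RAM is plentiful; only `--no-checkpointing` comes from the README text, the remaining finetune options are assumed example parameters:

```bash
# Sketch: trade ~2x memory for a faster finetuning run by turning off
# gradient checkpointing. Only --no-checkpointing is from the README above;
# --model-base, --train-data and --lora-out are assumed for illustration.
./finetune \
  --model-base open-llama-3b-v2-q8_0.gguf \
  --train-data shakespeare.txt \
  --lora-out lora-open-llama-3b-v2-q8_0-shakespeare-LATEST.bin \
  --no-checkpointing
```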