mirror of https://github.com/ggml-org/llama.cpp.git
synced 2025-10-31 08:51:55 +00:00

Merrick Christensen
committed by GitHub

parent 79f34abddb
commit f72f8f22c9
@@ -61,7 +61,7 @@ For example to apply 40% of the 'shakespeare' LORA adapter, 80% of the 'bible' L
   --lora lora-open-llama-3b-v2-q8_0-yet-another-one-LATEST.bin
 ```
 
-The scale numbers don't need to add up to one, and you can also use numbers creater than 1 to further increase the influence of an adapter. But making the values to big will sometimes result in worse output. Play around to find good values.
+The scale numbers don't need to add up to one, and you can also use numbers greater than 1 to further increase the influence of an adapter. But making the values to big will sometimes result in worse output. Play around to find good values.
 
 Gradient checkpointing reduces the memory requirements by ~50% but increases the runtime.
 If you have enough RAM, you can make finetuning a bit faster by disabling checkpointing with `--no-checkpointing`.
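As a sketch of the scaling behaviour the changed paragraph describes, the invocation below applies two adapters with explicit, non-normalized scales. The `--lora-scaled FNAME SCALE` flag and the exact model/adapter filenames are assumptions based on the surrounding llama.cpp finetune docs, not taken from this commit:

```shell
# Hypothetical example: apply the 'shakespeare' adapter at 0.4 and the
# 'bible' adapter at 0.8. The scales need not sum to 1, and values > 1
# further amplify an adapter, though very large values can hurt output.
./main -m open-llama-3b-v2-q8_0.gguf \
  --lora-scaled lora-open-llama-3b-v2-q8_0-shakespeare-LATEST.bin 0.4 \
  --lora-scaled lora-open-llama-3b-v2-q8_0-bible-LATEST.bin 0.8
```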