# llama.cpp/examples/training

This directory contains examples related to language model training using llama.cpp/GGML.
So far finetuning is technically functional (for FP32 models and limited hardware setups), but the code is very much a work in progress.
Finetuning of Stories 260K and LLaMA 3.2 1B seems to work with 24 GB of memory.

**For CPU training, compile llama.cpp without any additional backends such as CUDA.**

**For CUDA training, use the maximum number of GPU layers.**
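
A minimal build sketch for the two configurations above, assuming the standard CMake build (the backend option name `GGML_CUDA` is an assumption here and may differ between versions):

``` sh
# CPU-only training build: configure without any additional backends.
cmake -B build
cmake --build build --config Release

# Alternatively, for CUDA training: enable the CUDA backend at configure time
# so that all layers can be offloaded (-ngl 999 in the example below).
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release
```
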
Proof of concept:

``` sh
export model_name=llama_3.2-1b && export quantization=f32
./build/bin/finetune --file wikitext-2-raw/wiki.test.raw -ngl 999 --model models/${model_name}-${quantization}.gguf -c 512 -b 512 -ub 512
./build/bin/perplexity --file wikitext-2-raw/wiki.test.raw -ngl 999 --model finetuned-model.gguf
```

The perplexity value of the finetuned model should be lower after training on the test set for 2 epochs.
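
To quantify the improvement, it can help to record the baseline perplexity of the unmodified model before finetuning, for example (same data, flags, and model path as the finetuning run above):

``` sh
# Baseline perplexity of the original model, for comparison against finetuned-model.gguf.
./build/bin/perplexity --file wikitext-2-raw/wiki.test.raw -ngl 999 --model models/${model_name}-${quantization}.gguf
```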