mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-10-28 08:31:25 +00:00)

Commit a17a2683d8
The original file name, `ggml-alpaca-7b-q4.bin`, implied a first-generation GGML model. After the breaking changes (mentioned in https://github.com/ggerganov/llama.cpp/issues/382), `llama.cpp` now requires GGML v3, and those model files are named `*ggmlv3*.bin`. We should change the example to a model file that actually works, so that the example is more likely to run out of the box for more people, and fewer people waste time downloading the old Alpaca model.
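As a quick, hedged illustration of that naming convention (not part of the change itself), the following one-liner lists only GGML v3 model files in ./models, assuming they follow the *ggmlv3*.bin pattern:

# List only GGML v3 model files (assumes the *ggmlv3*.bin naming convention).
ls ./models/*ggmlv3*.bin 2>/dev/null || echo "No GGML v3 model files found in ./models"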
		
			
				
	
	
		
20 lines | 336 B | Bash | Executable File
#!/bin/bash

#
# Temporary script - will be removed in the future
#

cd `dirname $0`
cd ..

./main -m ./models/alpaca.13b.ggmlv3.q8_0.bin \
       --color \
       -f ./prompts/alpaca.txt \
       --ctx_size 2048 \
       -n -1 \
       -ins -b 256 \
       --top_k 10000 \
       --temp 0.2 \
       --repeat_penalty 1.1 \
       -t 7
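Since the point of the change is that the example should run out of the box, here is a minimal, hypothetical sketch (not part of the repository) of guarding the invocation with a check that the GGML v3 model file actually exists, assuming the same model path as the script above:

#!/bin/bash
# Hypothetical pre-flight check; MODEL matches the path used in the example script.
MODEL=./models/alpaca.13b.ggmlv3.q8_0.bin

if [ ! -f "$MODEL" ]; then
    echo "Model file $MODEL not found." >&2
    echo "Download a GGML v3 (*ggmlv3*.bin) Alpaca model and place it in ./models." >&2
    exit 1
fi

./main -m "$MODEL" --color -f ./prompts/alpaca.txt -ins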