	Update IPFS links to quantized alpaca with new tokenizer format (#352)
@@ -192,17 +192,16 @@ First, download the `ggml` Alpaca model into the `./models` folder:
 
 ```
 # use one of these
-# NOTE: these are copied from the alpaca.cpp repo - not sure how long these will work
 # TODO: add a script to simplify the download
-curl -o ggml-alpaca-7b-q4.bin -C - https://gateway.estuary.tech/gw/ipfs/QmQ1bf2BTnYxq73MFJWu1B7bQ2UD6qG7D7YDCxhTndVkPC
-curl -o ggml-alpaca-7b-q4.bin -C - https://ipfs.io/ipfs/QmQ1bf2BTnYxq73MFJWu1B7bQ2UD6qG7D7YDCxhTndVkPC
-curl -o ggml-alpaca-7b-q4.bin -C - https://cloudflare-ipfs.com/ipfs/QmQ1bf2BTnYxq73MFJWu1B7bQ2UD6qG7D7YDCxhTndVkPC
+curl -o ggml2-alpaca-7b-q4.bin -C - https://gateway.estuary.tech/gw/ipfs/QmUp1UGeQFDqJKvtjbSYPBiZZKRjLp8shVP9hT8ZB9Ynv1
+curl -o ggml2-alpaca-7b-q4.bin -C - https://ipfs.io/ipfs/QmUp1UGeQFDqJKvtjbSYPBiZZKRjLp8shVP9hT8ZB9Ynv1
+curl -o ggml2-alpaca-7b-q4.bin -C - https://cloudflare-ipfs.com/ipfs/QmUp1UGeQFDqJKvtjbSYPBiZZKRjLp8shVP9hT8ZB9Ynv1
 ```
 
 Now run the `main` tool like this:
 
 ```
-./main -m ./models/ggml-alpaca-7b-q4.bin --color -f ./prompts/alpaca.txt -ins
+./main -m ./models/ggml2-alpaca-7b-q4.bin --color -f ./prompts/alpaca.txt -ins
 ```
 
 Sample run:
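The three `curl` commands in the updated README fetch the same IPFS object through different gateways, so only one of them needs to succeed. As a rough convenience sketch (not part of this commit), a small shell loop could try each gateway in turn until a download completes; the hash and output filename are the ones introduced by this change, but the script itself is hypothetical:

```
#!/bin/sh
# Hypothetical download helper (not part of this commit): try each IPFS
# gateway listed in the README until one of them serves the file.
HASH=QmUp1UGeQFDqJKvtjbSYPBiZZKRjLp8shVP9hT8ZB9Ynv1
OUT=ggml2-alpaca-7b-q4.bin

for GATEWAY in \
    https://gateway.estuary.tech/gw/ipfs \
    https://ipfs.io/ipfs \
    https://cloudflare-ipfs.com/ipfs
do
    # -C - resumes a partial download; -f makes curl fail on HTTP errors
    # so the loop moves on to the next gateway.
    if curl -f -C - -o "$OUT" "$GATEWAY/$HASH"; then
        echo "downloaded $OUT via $GATEWAY"
        break
    fi
done
```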