mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-10-30 08:42:00 +00:00)
	readme : update gpt4all instructions (#980)
 README.md | 21
@@ -275,18 +275,19 @@ cadaver, cauliflower, cabbage (vegetable), catalpa (tree) and Cailleach.
 
 ### Using [GPT4All](https://github.com/nomic-ai/gpt4all)
 
-- Obtain the `gpt4all-lora-quantized.bin` model
-- It is distributed in the old `ggml` format, which is now obsoleted
-- You have to convert it to the new format using [./convert-gpt4all-to-ggml.py](./convert-gpt4all-to-ggml.py). You may also need to
-convert the model from the old format to the new format with [./migrate-ggml-2023-03-30-pr613.py](./migrate-ggml-2023-03-30-pr613.py):
+- Obtain the `tokenizer.model` file from LLaMA model and put it to `models`
+- Obtain the `added_tokens.json` file from Alpaca model and put it to `models`
+- Obtain the `gpt4all-lora-quantized.bin` file from GPT4All model and put it to `models/gpt4all-7B`
+- It is distributed in the old `ggml` format which is now obsoleted
+- You have to convert it to the new format using `convert.py`:
 
-  ```bash
-  python3 convert-gpt4all-to-ggml.py models/gpt4all-7B/gpt4all-lora-quantized.bin ./models/tokenizer.model
-  python3 migrate-ggml-2023-03-30-pr613.py models/gpt4all-7B/gpt4all-lora-quantized.bin models/gpt4all-7B/gpt4all-lora-quantized-new.bin
-  ```
+```bash
+python3 convert.py models/gpt4all-7B/gpt4all-lora-quantized.bin
+```
 
-- You can now use the newly generated `gpt4all-lora-quantized-new.bin` model in exactly the same way as all other models
-- The original model is saved in the same folder with a suffix `.orig`
+- You can now use the newly generated `models/gpt4all-7B/ggml-model-q4_0.bin` model in exactly the same way as all other models
 
 - The newer GPT4All-J model is not yet supported!
 
 ### Obtaining and verifying the Facebook LLaMA original model and Stanford Alpaca model data
 
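The updated instructions assume three files are already in place before `convert.py` is run. The following POSIX shell snippet is a sketch (not part of the commit) of a pre-flight check for that layout; the paths are taken from the bullets in the diff above, and nothing here invokes `convert.py` itself:

```shell
#!/bin/sh
# Pre-flight check for the file layout the updated README instructions assume.
# Paths come from the README bullets; run this from the repository root.
required="models/tokenizer.model \
models/added_tokens.json \
models/gpt4all-7B/gpt4all-lora-quantized.bin"

missing=0
for f in $required; do
    if [ ! -f "$f" ]; then
        echo "missing: $f"
        missing=1
    fi
done

if [ "$missing" -eq 0 ]; then
    echo "layout ok: ready to run convert.py"
else
    echo "layout incomplete: fetch the files listed above first"
fi
```

On a fresh checkout this reports all three files as missing, since the tokenizer and model weights are not distributed with the repository and must be obtained separately.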
Pavol Rusnak