	remove outdated references to -eps and -gqa from README (#2881)
@@ -729,8 +729,6 @@ python3 convert.py pygmalion-7b/ --outtype q4_1
   - [LLaMA 2 7B chat](https://huggingface.co/TheBloke/Llama-2-7B-chat-GGML)
   - [LLaMA 2 13B chat](https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML)
   - [LLaMA 2 70B chat](https://huggingface.co/TheBloke/Llama-2-70B-chat-GGML)
-- Specify `-eps 1e-5` for best generation quality
-- Specify `-gqa 8` for 70B models to work
 
 ### Verifying the model files
 