	readme : add warning about Q4_2 and Q4_3
@@ -7,6 +7,10 @@
 
 Inference of [LLaMA](https://arxiv.org/abs/2302.13971) model in pure C/C++
 
+**Warnings**
+
+- `Q4_2` and `Q4_3` are still in development. Do not expect any kind of backward compatibility until they are finalized
+
 **Hot topics:**
 
 - [Added LoRA support](https://github.com/ggerganov/llama.cpp/pull/820)
Georgi Gerganov