mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-10-30 08:42:00 +00:00)
	Add notice about pending change
README.md | 12 +++++++++---
@@ -5,15 +5,21 @@
 
 Inference of [LLaMA](https://arxiv.org/abs/2302.13971) model in pure C/C++
 
+---
+
+**TEMPORARY NOTICE:**
+Big code change incoming: https://github.com/ggerganov/llama.cpp/pull/370
+
+Do not merge stuff until we merge this. Probably merge will happen on March 22 ~6:00am UTC
+
+---
+
 **Hot topics:**
 
 - [Added Alpaca support](https://github.com/ggerganov/llama.cpp#instruction-mode-with-alpaca)
 - Cache input prompts for faster initialization: https://github.com/ggerganov/llama.cpp/issues/64
 - Create a `llama.cpp` logo: https://github.com/ggerganov/llama.cpp/issues/105
 
-**TEMPORARY NOTICE:**
-If you're updating to the latest master, you will need to regenerate your model files as the format has changed.
-
 ## Description
 
 The main goal is to run the model using 4-bit quantization on a MacBook
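For readers hitting the "regenerate your model files" notice in the diff above: a minimal sketch of the regeneration steps, based on the conversion workflow llama.cpp's README described around this time. The `models/7B/` path and the exact arguments to `convert-pth-to-ggml.py` and `quantize` are assumptions from that era's layout and may differ in later versions.

```bash
# Sketch: regenerate ggml model files after a format change.
# Assumes the original LLaMA weights live in ./models/7B/ and that the
# repo's convert-pth-to-ggml.py script and quantize binary are present.

# convert the 7B model to ggml FP16 format
# (the trailing 1 selects f16 output; writes models/7B/ggml-model-f16.bin)
python3 convert-pth-to-ggml.py models/7B/ 1

# quantize the model to 4 bits (the trailing 2 selects the q4_0 type)
./quantize ./models/7B/ggml-model-f16.bin ./models/7B/ggml-model-q4_0.bin 2
```

Because the pending change in https://github.com/ggerganov/llama.cpp/pull/370 alters the on-disk format, files produced before the merge are not expected to load afterward; rerunning the conversion from the original weights is the safe path.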
Author: Georgi Gerganov