mirror of https://github.com/ggml-org/llama.cpp.git
synced 2025-11-03 09:22:01 +00:00

readme : incoming BREAKING CHANGE
README.md | 12 ++++++------
@@ -9,13 +9,13 @@
 
 Inference of [LLaMA](https://arxiv.org/abs/2302.13971) model in pure C/C++
 
-**Hot topics:**
+### 🚧 Incoming breaking change + refactoring:
 
-- Simple web chat example: https://github.com/ggerganov/llama.cpp/pull/1998
-- k-quants now support super-block size of 64: https://github.com/ggerganov/llama.cpp/pull/2001
-- New roadmap: https://github.com/users/ggerganov/projects/7
-- Azure CI brainstorming: https://github.com/ggerganov/llama.cpp/discussions/1985
-- p1 : LLM-based code completion engine at the edge : https://github.com/ggml-org/p1/discussions/1
+See PR https://github.com/ggerganov/llama.cpp/pull/2398 for more info.
+
+To devs: avoid making big changes to `llama.h` / `llama.cpp` until merged
+
+----
 
 <details>
   <summary>Table of Contents</summary>