mirror of https://github.com/ggml-org/llama.cpp.git
synced 2025-10-30 08:42:00 +00:00
readme : add link to new k-quants for visibility
@@ -11,6 +11,7 @@ Inference of [LLaMA](https://arxiv.org/abs/2302.13971) model in pure C/C++
 
 **Hot topics:**
 
+- k-quants now support super-block size of 64: https://github.com/ggerganov/llama.cpp/pull/2001
 - New roadmap: https://github.com/users/ggerganov/projects/7
 - Azure CI brainstorming: https://github.com/ggerganov/llama.cpp/discussions/1985
 - p1 : LLM-based code completion engine at the edge : https://github.com/ggml-org/p1/discussions/1
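The hot-topic line above refers to k-quants, which quantize weights in super-blocks: each super-block is split into sub-blocks, each sub-block gets its own scale, and the sub-block scales are themselves quantized against one shared super-block scale. The NumPy sketch below is a toy illustration of that general idea only; the block sizes, bit widths, and function names here are illustrative assumptions, not llama.cpp's actual Q*_K layout.

```python
import numpy as np

def quantize_superblock(x, sub=16, bits=4, scale_bits=6):
    """Toy super-block quantization: split x into sub-blocks of `sub`
    weights, give each a scale, and quantize the scales themselves
    against a single float super-block scale d (illustrative only)."""
    assert x.size % sub == 0
    qmax = 2 ** (bits - 1) - 1          # symmetric 4-bit range: -7..7
    smax = 2 ** scale_bits - 1          # 6-bit unsigned scale range: 0..63
    subs = x.reshape(-1, sub)
    scales = np.abs(subs).max(axis=1) / qmax   # per-sub-block scales
    d = scales.max() / smax                    # shared super-block scale
    qscales = np.round(scales / d).astype(np.int32) if d > 0 else \
        np.zeros(len(scales), dtype=np.int32)
    eff = qscales * d                          # dequantized sub-block scales
    safe = np.where(eff > 0, eff, 1.0)         # avoid division by zero
    q = np.clip(np.round(subs / safe[:, None]), -qmax, qmax).astype(np.int32)
    q[eff == 0] = 0
    return d, qscales, q

def dequantize_superblock(d, qscales, q):
    """Reconstruct weights from quantized values and quantized scales."""
    return (q * (qscales * d)[:, None]).ravel()
```

With this layout, a super-block of 64 weights stores one float scale, four small integer scales, and 64 low-bit values, which is the storage trade-off the PR's smaller super-block size adjusts.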
Georgi Gerganov