mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-10-30 08:42:00 +00:00)
	readme : update hot topics
@@ -9,9 +9,11 @@ Inference of [LLaMA](https://arxiv.org/abs/2302.13971) model in pure C/C++
 
 **Hot topics:**
 
-- Quantization formats `Q4` and `Q8` have changed again (19 May) - [(info)](https://github.com/ggerganov/llama.cpp/pull/1508)
-- Quantization formats `Q4` and `Q5` have changed - requantize any old models [(info)](https://github.com/ggerganov/llama.cpp/pull/1405)
-- [Roadmap May 2023](https://github.com/ggerganov/llama.cpp/discussions/1220)
+- GPU support with Metal (Apple Silicon): https://github.com/ggerganov/llama.cpp/pull/1642
+- High-quality 2,3,4,5,6-bit quantization: https://github.com/ggerganov/llama.cpp/pull/1684
+- Multi-GPU support: https://github.com/ggerganov/llama.cpp/pull/1607
+- Training LLaMA models from scratch: https://github.com/ggerganov/llama.cpp/pull/1652
+- CPU threading improvements: https://github.com/ggerganov/llama.cpp/pull/1632
 
 <details>
   <summary>Table of Contents</summary>
Author: Georgi Gerganov