Mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-10-31 08:51:55 +00:00)
	readme : update hot topics
```diff
@@ -9,10 +9,8 @@ Inference of [LLaMA](https://arxiv.org/abs/2302.13971) model in pure C/C++
 
 **Hot topics:**
 
+- [Roadmap May 2023](https://github.com/ggerganov/llama.cpp/discussions/1220)
 - [New quantization methods](https://github.com/ggerganov/llama.cpp#quantization)
-- [Added LoRA support](https://github.com/ggerganov/llama.cpp/pull/820)
-- [Add GPU support to ggml](https://github.com/ggerganov/llama.cpp/discussions/915)
-- [Roadmap Apr 2023](https://github.com/ggerganov/llama.cpp/discussions/784)
 
 ## Description
 
```
Author: Georgi Gerganov