	readme : update hot topics about new LoRA functionality
@@ -9,6 +9,7 @@ Inference of [LLaMA](https://arxiv.org/abs/2302.13971) model in pure C/C++
 
 **Hot topics:**
 
+- [Added LoRA support](https://github.com/ggerganov/llama.cpp/pull/820)
 - [Add GPU support to ggml](https://github.com/ggerganov/llama.cpp/discussions/915)
 - [Roadmap Apr 2023](https://github.com/ggerganov/llama.cpp/discussions/784)
 
Georgi Gerganov