	readme : update hot topics + model links (#3399)
@@ -11,7 +11,7 @@ Inference of [LLaMA](https://arxiv.org/abs/2302.13971) model in pure C/C++
 
 ### Hot topics
 
-- Parallel decoding + continuous batching support incoming: [#3228](https://github.com/ggerganov/llama.cpp/pull/3228) \
+- Parallel decoding + continuous batching support added: [#3228](https://github.com/ggerganov/llama.cpp/pull/3228) \
   **Devs should become familiar with the new API**
 - Local Falcon 180B inference on Mac Studio
 
@@ -92,7 +92,8 @@ as the main playground for developing new features for the [ggml](https://github
 - [X] [WizardLM](https://github.com/nlpxucan/WizardLM)
 - [X] [Baichuan-7B](https://huggingface.co/baichuan-inc/baichuan-7B) and its derivations (such as [baichuan-7b-sft](https://huggingface.co/hiyouga/baichuan-7b-sft))
 - [X] [Aquila-7B](https://huggingface.co/BAAI/Aquila-7B) / [AquilaChat-7B](https://huggingface.co/BAAI/AquilaChat-7B)
-- [X] Mistral AI v0.1
+- [X] [Starcoder models](https://github.com/ggerganov/llama.cpp/pull/3187)
+- [X] [Mistral AI v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
 
 **Bindings:**
 
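The first hunk points developers at the new batching API introduced in #3228. Below is a minimal sketch of how that API is typically used for parallel decoding; it assumes the llama_batch / llama_decode interface as it appears in later llama.cpp releases, so the exact llama_batch_init signature and batch field layout may differ at this particular commit.

// Minimal sketch of the batching API referenced in the hot topic above.
// Assumes the llama_batch / llama_decode interface from later llama.cpp
// releases; exact signatures and fields may differ at this commit.
#include "llama.h"

static int decode_two_sequences(struct llama_context * ctx,
                                llama_token tok_a, llama_token tok_b) {
    // room for 2 tokens, no embedding input, up to 2 sequence ids per token
    struct llama_batch batch = llama_batch_init(2, 0, 2);

    // token for sequence 0 at position 0
    batch.token   [batch.n_tokens]    = tok_a;
    batch.pos     [batch.n_tokens]    = 0;
    batch.n_seq_id[batch.n_tokens]    = 1;
    batch.seq_id  [batch.n_tokens][0] = 0;
    batch.logits  [batch.n_tokens]    = 1;   // request logits for this token
    batch.n_tokens++;

    // token for sequence 1 at position 0 -- evaluated in the same call
    batch.token   [batch.n_tokens]    = tok_b;
    batch.pos     [batch.n_tokens]    = 0;
    batch.n_seq_id[batch.n_tokens]    = 1;
    batch.seq_id  [batch.n_tokens][0] = 1;
    batch.logits  [batch.n_tokens]    = 1;
    batch.n_tokens++;

    // a single llama_decode call processes both sequences (continuous batching)
    const int ret = llama_decode(ctx, batch);

    llama_batch_free(batch);
    return ret;
}

The point of the new API is that tokens from independent sequences are packed into one batch and decoded together, rather than evaluating each sequence with a separate call.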
BarfingLemurs