	readme : update hot topics
Author: Georgi Gerganov
```diff
--- a/README.md
+++ b/README.md
@@ -20,7 +20,8 @@ Inference of Meta's [LLaMA](https://arxiv.org/abs/2302.13971) model (and others)
 
 ### Hot topics
 
-- **MoE memory layout has been updated - reconvert models for `mmap` support and regenerate `imatrix` https://github.com/ggerganov/llama.cpp/pull/6387**
+- **BPE pre-tokenization support has been added: https://github.com/ggerganov/llama.cpp/pull/6920**
+- MoE memory layout has been updated - reconvert models for `mmap` support and regenerate `imatrix` https://github.com/ggerganov/llama.cpp/pull/6387
 - Model sharding instructions using `gguf-split` https://github.com/ggerganov/llama.cpp/discussions/6404
 - Fix major bug in Metal batched inference https://github.com/ggerganov/llama.cpp/pull/6225
 - Multi-GPU pipeline parallelism support https://github.com/ggerganov/llama.cpp/pull/6017
```
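The sharding item in the list above points to usage instructions for `gguf-split`. As a quick illustration, here is a minimal shell sketch of splitting and re-merging a model, assuming the `--split-max-tensors` and `--merge` options described in the linked discussion; the model filename is a placeholder and exact flags may differ between llama.cpp versions:

```sh
# Split a single GGUF file into shards of at most 256 tensors each.
# "ggml-model-f16.gguf" is a placeholder filename.
./gguf-split --split --split-max-tensors 256 ggml-model-f16.gguf ggml-model-f16

# The tool writes ggml-model-f16-00001-of-0000N.gguf, ..., and llama.cpp
# can load the sharded model by pointing at the first shard:
./main -m ggml-model-f16-00001-of-00003.gguf -p "Hello"

# Merge the shards back into one file if a single GGUF is needed:
./gguf-split --merge ggml-model-f16-00001-of-00003.gguf ggml-model-f16-merged.gguf
```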