	readme : add rpc backend (#12842)
README.md
@@ -9,13 +9,6 @@
 
 Inference of Meta's [LLaMA](https://arxiv.org/abs/2302.13971) model (and others) in pure C/C++
 
-> [!IMPORTANT]
-> New `llama.cpp` package location: [ggml-org/llama.cpp](https://github.com/ggml-org/llama.cpp/pkgs/container/llama.cpp)
->
-> Update your container URLs to: `ghcr.io/ggml-org/llama.cpp`
->
-> More info: https://github.com/ggml-org/llama.cpp/discussions/11801
-
 ## Recent API changes
 
 - [Changelog for `libllama` API](https://github.com/ggml-org/llama.cpp/issues/9289)
@@ -247,6 +240,7 @@ Instructions for adding support for new models: [HOWTO-add-model.md](docs/develo
 | [Vulkan](docs/build.md#vulkan) | GPU |
 | [CANN](docs/build.md#cann) | Ascend NPU |
 | [OpenCL](docs/backend/OPENCL.md) | Adreno GPU |
+| [RPC](https://github.com/ggml-org/llama.cpp/tree/master/examples/rpc) | All |
 
 ## Building the project
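The added row documents the RPC backend (examples/rpc), which lets a llama.cpp client offload part or all of the computation to rpc-server instances running on other hosts or devices; the "All" entry reflects that each remote rpc-server uses whatever local backend it was built with (CUDA, Metal, CPU, etc.). As a rough sketch of how that fits together, assuming the `GGML_RPC` CMake option, the `rpc-server` tool, and the `--rpc` client flag described in examples/rpc (option names, ports, and addresses below are illustrative, not authoritative):

```sh
# Sketch only -- see examples/rpc for the authoritative instructions.

# 1) On each worker machine: build with the RPC backend and start a server.
cmake -B build -DGGML_RPC=ON
cmake --build build --config Release
./build/bin/rpc-server -p 50052          # serve ggml compute requests on port 50052

# 2) On the client machine: list the worker endpoints with --rpc.
./build/bin/llama-cli -m model.gguf -n 64 -ngl 99 \
    --rpc 192.168.88.10:50052,192.168.88.11:50052 \
    -p "Building a website can be done in 10 simple steps:"
```

The layers offloaded with `-ngl` are then distributed across the listed endpoints, so a single model too large for one device can be split across several machines.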