Mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-10-30 08:42:00 +00:00)
	Remove obsolete information from README
README.md | 10 lines changed
@@ -17,7 +17,7 @@ Inference of [LLaMA](https://arxiv.org/abs/2302.13971) model in pure C/C++
 The main goal is to run the model using 4-bit quantization on a MacBook
 
 - Plain C/C++ implementation without dependencies
-- Apple silicon first-class citizen - optimized via ARM NEON
+- Apple silicon first-class citizen - optimized via ARM NEON and Accelerate framework
 - AVX2 support for x86 architectures
 - Mixed F16 / F32 precision
 - 4-bit quantization support
@@ -323,14 +323,6 @@ or with light image:
 docker run -v /llama/models:/models ghcr.io/ggerganov/llama.cpp:light -m /models/7B/ggml-model-q4_0.bin -p "Building a website can be done in 10 simple steps:" -n 512
 ```
 
-## Limitations
-
-- Probably the token sampling can be improved
-- The Accelerate framework is actually currently unused since I found that for tensor shapes typical for the Decoder,
-  there is no benefit compared to the ARM_NEON intrinsics implementation. Of course, it's possible that I simply don't
-  know how to utilize it properly. But in any case, you can even disable it with `LLAMA_NO_ACCELERATE=1 make` and the
-  performance will be the same, since no BLAS calls are invoked by the current implementation
-
 ### Contributing
 
 - Contributors can open PRs
Author: Georgi Gerganov