Mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-10-30 08:42:00 +00:00)
	Tidy Android Instructions README.md (#7016)
* Tidy Android Instructions README.md: remove CLBlast instructions (outdated), add OpenBLAS
* don't assume git is installed: add `apt install git` so that `git clone` works
* removed OpenBLAS: linked to the Linux build instructions instead
* fix typo: removed the word "run"
* correct style (Co-authored-by: slaren <slarengh@gmail.com>)
* correct grammar (Co-authored-by: slaren <slarengh@gmail.com>)
* delete reference to Android API
* remove F-Droid reference, link directly to Termux: F-Droid is not required (Co-authored-by: slaren <slarengh@gmail.com>)
* Update README.md (Co-authored-by: slaren <slarengh@gmail.com>)

---------

Co-authored-by: slaren <slarengh@gmail.com>
Author: Jeximo

README.md | 44 ++++++++------------------------------------
@@ -977,48 +977,20 @@ Here is a demo of an interactive session running on Pixel 5 phone:
 
 https://user-images.githubusercontent.com/271616/225014776-1d567049-ad71-4ef2-b050-55b0b3b9274c.mp4
 
-#### Building the Project using Termux (F-Droid)
-Termux from F-Droid offers an alternative route to execute the project on an Android device. This method empowers you to construct the project right from within the terminal, negating the requirement for a rooted device or SD Card.
+#### Build on Android using Termux
+[Termux](https://github.com/termux/termux-app#installation) is an alternative to execute `llama.cpp` on an Android device (no root required).
 
-Outlined below are the directives for installing the project using OpenBLAS and CLBlast. This combination is specifically designed to deliver peak performance on recent devices that feature a GPU.
-
-If you opt to utilize OpenBLAS, you'll need to install the corresponding package.
 ```
-apt install libopenblas
+apt update && apt upgrade -y
+apt install git
 ```
 
-Subsequently, if you decide to incorporate CLBlast, you'll first need to install the requisite OpenCL packages:
+It's recommended to move your model inside the `~/` directory for best performance:
 ```
-apt install ocl-icd opencl-headers opencl-clhpp clinfo
+cd storage/downloads
+mv model.gguf ~/
 ```
 
-In order to compile CLBlast, you'll need to first clone the respective Git repository, which can be found at this URL: https://github.com/CNugteren/CLBlast. Alongside this, clone this repository into your home directory. Once this is done, navigate to the CLBlast folder and execute the commands detailed below:
-```
-cmake .
-make
-cp libclblast.so* $PREFIX/lib
-cp ./include/clblast.h ../llama.cpp
-```
-
-Following the previous steps, navigate to the LlamaCpp directory. To compile it with OpenBLAS and CLBlast, execute the command provided below:
-```
-cp /data/data/com.termux/files/usr/include/openblas/cblas.h .
-cp /data/data/com.termux/files/usr/include/openblas/openblas_config.h .
-make LLAMA_CLBLAST=1 //(sometimes you need to run this command twice)
-```
-
-Upon completion of the aforementioned steps, you will have successfully compiled the project. To run it using CLBlast, a slight adjustment is required: a command must be issued to direct the operations towards your device's physical GPU, rather than the virtual one. The necessary command is detailed below:
-```
-GGML_OPENCL_PLATFORM=0
-GGML_OPENCL_DEVICE=0
-export LD_LIBRARY_PATH=/vendor/lib64:$LD_LIBRARY_PATH
-```
-
-(Note: some Android devices, like the Zenfone 8, need the following command instead - "export LD_LIBRARY_PATH=/system/vendor/lib64:$LD_LIBRARY_PATH". Source: https://www.reddit.com/r/termux/comments/kc3ynp/opencl_working_in_termux_more_in_comments/ )
-
-For easy and swift re-execution, consider documenting this final part in a .sh script file. This will enable you to rerun the process with minimal hassle.
-
-Place your desired model into the `~/llama.cpp/models/` directory and execute the `./main (...)` script.
-
+[Follow the Linux build instructions](https://github.com/ggerganov/llama.cpp#build) to build `llama.cpp`.
 ### Docker
 
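A note for readers following the new instructions: the `cd storage/downloads` step assumes Termux has already been granted access to Android's shared storage, which is not the case on a fresh install. The command below is the standard Termux way to set that up; it is background context, not part of this commit.

```
# Run once on a fresh Termux install: grants the storage permission and
# creates the ~/storage symlinks (including ~/storage/downloads) that
# make the README's `cd storage/downloads` step work.
termux-setup-storage
```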
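For completeness, here is a minimal sketch of the full workflow the trimmed README now implies, run inside Termux. The `clang` and `cmake` packages, the `model.gguf` filename, and the prompt are illustrative assumptions (the README itself only names `git`), and the `main` binary name matches the repository layout around the time of this commit; newer trees ship it as `llama-cli`.

```
# Hedged end-to-end sketch, assuming Termux with storage already set up
# and a GGUF model sitting in the Android Downloads folder.
apt update && apt upgrade -y
apt install git clang cmake   # clang/cmake are assumed build prerequisites

# Move the model into the home directory for better I/O performance.
cd storage/downloads
mv model.gguf ~/

# Get the code and build it per the linked Linux build instructions.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release

# Run a short generation against the model moved into ~/ above.
./build/bin/main -m ~/model.gguf -p "Hello from Android:" -n 64
```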