	devops : RPM Specs (#2723)
* Create llama-cpp.srpm
* Rename llama-cpp.srpm to llama-cpp.srpm.spec
  Correcting extension.
* Tested spec success.
* Update llama-cpp.srpm.spec
* Create lamma-cpp-cublas.srpm.spec
* Create lamma-cpp-clblast.srpm.spec
* Update lamma-cpp-cublas.srpm.spec
  Added BuildRequires
* Moved to devops dir
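For reference, specs like these are normally exercised with rpmdevtools and rpmbuild. A minimal sketch (not part of the commit), assuming rpmdevtools is installed and the default ~/rpmbuild tree is used:

    # Download Source0 into ~/rpmbuild/SOURCES, then build the
    # source RPM and binary RPM from the spec in one step.
    spectool -g -R .devops/llama-cpp.srpm.spec
    rpmbuild -ba .devops/llama-cpp.srpm.spec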
.devops/lamma-cpp-clblast.srpm.spec (new file, 58 lines)
@@ -0,0 +1,58 @@
# SRPM for building from source and packaging an RPM for RPM-based distros.
# https://fedoraproject.org/wiki/How_to_create_an_RPM_package
# Built and maintained by John Boero - boeroboy@gmail.com
# In honor of Seth Vidal https://www.redhat.com/it/blog/thank-you-seth-vidal

# Notes for llama.cpp:
# 1. Tags are currently based on hash - which will not sort asciibetically.
#    We need to declare standard versioning if people want to sort latest releases.
# 2. Builds for CUDA/OpenCL support are separate, with different dependencies.
# 3. NVIDIA's developer repo must be enabled, with nvcc, cuBLAS, CLBlast, etc. installed.
#    Example: https://developer.download.nvidia.com/compute/cuda/repos/fedora37/x86_64/cuda-fedora37.repo
# 4. OpenCL/CLBlast support simply requires the ICD loader and basic OpenCL libraries.
#    It is up to the user to install the correct vendor-specific support.

Name:           llama.cpp-clblast
Version:        master
Release:        1%{?dist}
Summary:        OpenCL Inference of LLaMA model in pure C/C++
License:        MIT
Source0:        https://github.com/ggerganov/llama.cpp/archive/refs/heads/master.tar.gz
BuildRequires:  coreutils make gcc-c++ git mesa-libOpenCL-devel clblast-devel
URL:            https://github.com/ggerganov/llama.cpp

%define debug_package %{nil}
%define source_date_epoch_from_changelog 0

%description
OpenCL inference for Meta's LLaMA 2 models using default options.

%prep
%setup -n llama.cpp-master

%build
make -j LLAMA_CLBLAST=1

%install
mkdir -p %{buildroot}%{_bindir}/
cp -p main %{buildroot}%{_bindir}/llamacppclblast
cp -p server %{buildroot}%{_bindir}/llamacppclblastserver
cp -p simple %{buildroot}%{_bindir}/llamacppclblastsimple

%clean
rm -rf %{buildroot}
rm -rf %{_builddir}/*

%files
%{_bindir}/llamacppclblast
%{_bindir}/llamacppclblastserver
%{_bindir}/llamacppclblastsimple

%pre

%post

%preun
%postun

%changelog
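Per note 4 in the spec header, the CLBlast binaries still need an OpenCL ICD loader and a vendor ICD at runtime. A hedged example for Fedora (package names are an assumption, not from the commit):

    # Install the ICD loader plus Mesa's generic OpenCL implementation,
    # then check that at least one OpenCL platform is visible.
    sudo dnf install ocl-icd mesa-libOpenCL clinfo
    clinfo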
.devops/lamma-cpp-cublas.srpm.spec (new file, 59 lines)
@@ -0,0 +1,59 @@
# SRPM for building from source and packaging an RPM for RPM-based distros.
# https://fedoraproject.org/wiki/How_to_create_an_RPM_package
# Built and maintained by John Boero - boeroboy@gmail.com
# In honor of Seth Vidal https://www.redhat.com/it/blog/thank-you-seth-vidal

# Notes for llama.cpp:
# 1. Tags are currently based on hash - which will not sort asciibetically.
#    We need to declare standard versioning if people want to sort latest releases.
# 2. Builds for CUDA/OpenCL support are separate, with different dependencies.
# 3. NVIDIA's developer repo must be enabled, with nvcc, cuBLAS, CLBlast, etc. installed.
#    Example: https://developer.download.nvidia.com/compute/cuda/repos/fedora37/x86_64/cuda-fedora37.repo
# 4. OpenCL/CLBlast support simply requires the ICD loader and basic OpenCL libraries.
#    It is up to the user to install the correct vendor-specific support.

Name:           llama.cpp-cublas
Version:        master
Release:        1%{?dist}
Summary:        CUDA Inference of LLaMA model in pure C/C++ (cuBLAS)
License:        MIT
Source0:        https://github.com/ggerganov/llama.cpp/archive/refs/heads/master.tar.gz
BuildRequires:  coreutils make gcc-c++ git cuda-toolkit
Requires:       cuda-toolkit
URL:            https://github.com/ggerganov/llama.cpp

%define debug_package %{nil}
%define source_date_epoch_from_changelog 0

%description
CUDA inference for Meta's LLaMA 2 models using default options.

%prep
%setup -n llama.cpp-master

%build
make -j LLAMA_CUBLAS=1

%install
mkdir -p %{buildroot}%{_bindir}/
cp -p main %{buildroot}%{_bindir}/llamacppcublas
cp -p server %{buildroot}%{_bindir}/llamacppcublasserver
cp -p simple %{buildroot}%{_bindir}/llamacppcublassimple

%clean
rm -rf %{buildroot}
rm -rf %{_builddir}/*

%files
%{_bindir}/llamacppcublas
%{_bindir}/llamacppcublasserver
%{_bindir}/llamacppcublassimple

%pre

%post

%preun
%postun

%changelog
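Per note 3, the CUDA build expects NVIDIA's developer repo to be enabled before cuda-toolkit can be installed. One possible sequence, using the repo URL from the spec comments (dnf-plugins-core assumed to be installed):

    # Add the NVIDIA CUDA repo for Fedora 37, then pull in the toolkit
    # that satisfies the spec's BuildRequires/Requires.
    sudo dnf config-manager --add-repo \
        https://developer.download.nvidia.com/compute/cuda/repos/fedora37/x86_64/cuda-fedora37.repo
    sudo dnf install cuda-toolkit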
.devops/llama-cpp.srpm.spec (new file, 58 lines)
@@ -0,0 +1,58 @@
# SRPM for building from source and packaging an RPM for RPM-based distros.
# https://fedoraproject.org/wiki/How_to_create_an_RPM_package
# Built and maintained by John Boero - boeroboy@gmail.com
# In honor of Seth Vidal https://www.redhat.com/it/blog/thank-you-seth-vidal

# Notes for llama.cpp:
# 1. Tags are currently based on hash - which will not sort asciibetically.
#    We need to declare standard versioning if people want to sort latest releases.
# 2. Builds for CUDA/OpenCL support are separate, with different dependencies.
# 3. NVIDIA's developer repo must be enabled, with nvcc, cuBLAS, CLBlast, etc. installed.
#    Example: https://developer.download.nvidia.com/compute/cuda/repos/fedora37/x86_64/cuda-fedora37.repo
# 4. OpenCL/CLBlast support simply requires the ICD loader and basic OpenCL libraries.
#    It is up to the user to install the correct vendor-specific support.

Name:           llama.cpp
Version:        master
Release:        1%{?dist}
Summary:        CPU Inference of LLaMA model in pure C/C++ (no CUDA/OpenCL)
License:        MIT
Source0:        https://github.com/ggerganov/llama.cpp/archive/refs/heads/master.tar.gz
BuildRequires:  coreutils make gcc-c++ git
URL:            https://github.com/ggerganov/llama.cpp

%define debug_package %{nil}
%define source_date_epoch_from_changelog 0

%description
CPU inference for Meta's LLaMA 2 models using default options.

%prep
%autosetup

%build
make -j

%install
mkdir -p %{buildroot}%{_bindir}/
cp -p main %{buildroot}%{_bindir}/llamacpp
cp -p server %{buildroot}%{_bindir}/llamacppserver
cp -p simple %{buildroot}%{_bindir}/llamacppsimple

%clean
rm -rf %{buildroot}
rm -rf %{_builddir}/*

%files
%{_bindir}/llamacpp
%{_bindir}/llamacppserver
%{_bindir}/llamacppsimple

%pre

%post

%preun
%postun

%changelog
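Once built, the binary RPM lands under ~/rpmbuild/RPMS/. The exact filename below is illustrative (it depends on %{?dist} and the build arch), as is the model path:

    sudo dnf install ~/rpmbuild/RPMS/x86_64/llama.cpp-master-1.fc37.x86_64.rpm
    # The spec installs the main binary as "llamacpp":
    llamacpp -m /path/to/model.bin -p "Hello"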