Mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-10-30 08:42:00 +00:00)
readme : update ROCm Windows instructions (#4122)

* Update README.md
* Update README.md

Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
This commit is contained in:

Author: Aaryaman Vasishta, committed by GitHub

parent 881800d1f0
commit dfc7cd48b1
README.md (12 changed lines)
````diff
@@ -410,19 +410,27 @@ Building the program with BLAS support may lead to some performance improvements
   This provides BLAS acceleration on HIP-supported AMD GPUs.
   Make sure to have ROCm installed.
   You can download it from your Linux distro's package manager or from here: [ROCm Quick Start (Linux)](https://rocm.docs.amd.com/en/latest/deploy/linux/quick_start.html).
-  Windows support is coming soon...
-
   - Using `make`:
     ```bash
     make LLAMA_HIPBLAS=1
     ```
-  - Using `CMake`:
+  - Using `CMake` for Linux:
     ```bash
     mkdir build
     cd build
     CC=/opt/rocm/llvm/bin/clang CXX=/opt/rocm/llvm/bin/clang++ cmake .. -DLLAMA_HIPBLAS=ON
     cmake --build .
     ```
+  - Using `CMake` for Windows:
+    ```bash
+    mkdir build
+    cd build
+    cmake -G Ninja -DAMDGPU_TARGETS=gfx1100 -DLLAMA_HIPBLAS=ON -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ ..
+    cmake --build .
+    ```
+    Make sure that `AMDGPU_TARGETS` is set to the GPU arch you want to compile for. The above example uses `gfx1100` that corresponds to Radeon RX 7900XTX/XT/GRE. You can find a list of targets [here](https://llvm.org/docs/AMDGPUUsage.html#processors)
+
 
   The environment variable [`HIP_VISIBLE_DEVICES`](https://rocm.docs.amd.com/en/latest/understand/gpu_isolation.html#hip-visible-devices) can be used to specify which GPU(s) will be used.
   If your GPU is not officially supported you can use the environment variable [`HSA_OVERRIDE_GFX_VERSION`] set to a similar GPU, for example 10.3.0 on RDNA2 or 11.0.0 on RDNA3.
````
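The two environment variables mentioned at the end of the diff (`HIP_VISIBLE_DEVICES` and `HSA_OVERRIDE_GFX_VERSION`) are set at run time, not build time. A minimal usage sketch follows; the binary path and model filename are assumptions for illustration, adjust them to your build layout:

```shell
# Sketch: run a HIP build on the first GPU only, overriding the reported
# gfx version for an RDNA3 card that is not on the official ROCm support list.
# Assumptions: the CMake build produced ./build/bin/main and a quantized
# model exists at the path below (both are placeholders).
HIP_VISIBLE_DEVICES=0 \
HSA_OVERRIDE_GFX_VERSION=11.0.0 \
./build/bin/main -m ./models/7B/ggml-model-q4_0.gguf -p "Hello"
```

Since both variables are plain environment variables, they can equally be exported once per shell session instead of being prefixed to every invocation.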