[SYCL] set context default value to avoid memory issue, update guide (#9476)
* set context default to avoid memory issue, update guide
* Update docs/backend/SYCL.md

Co-authored-by: arthw <14088817+arthw@users.noreply.github.com>
Co-authored-by: Meng, Hengyu <hengyu.meng@intel.com>
```diff
--- a/docs/backend/SYCL.md
+++ b/docs/backend/SYCL.md
@@ -636,6 +636,14 @@ use 1 SYCL GPUs: [0] with Max compute units:512
 
   It's the same for other projects, including the llama.cpp SYCL backend.
 
+- Meet the issue: `Native API failed. Native API returns: -6 (PI_ERROR_OUT_OF_HOST_MEMORY) -6 (PI_ERROR_OUT_OF_HOST_MEMORY) -999 (UNKNOWN PI error)` or `failed to allocate SYCL0 buffer`
+
+  Device memory is not enough.
+
+  |Reason|Solution|
+  |-|-|
+  |The default context is too big. It leads to more memory usage.|Set `-c 8192` or a smaller value.|
+  |The model is big and requires more memory than the device has.|Choose a smaller quantized model, like Q5 -> Q4;<br>use more than one device to load the model.|
 
 ### **GitHub contribution**:
 Please add the **[SYCL]** prefix/tag in issues/PRs titles to help the SYCL-team check/address them without delay.
```
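For quick verification, the table's first workaround can be applied directly on the command line. A minimal sketch, assuming the default build layout (`./build/bin/llama-cli`) and the quantized model used by the example script below:

```sh
# Cap the context at 8192 (or smaller) to avoid PI_ERROR_OUT_OF_HOST_MEMORY
# on devices with limited memory; model path is an assumption from the script.
./build/bin/llama-cli -m models/llama-2-7b.Q4_0.gguf \
    -p "Building a website can be done in 10 simple steps:\nStep 1:" \
    -n 400 -e -ngl 33 -c 8192
```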
```diff
--- a/examples/sycl/run-llama2.sh
+++ b/examples/sycl/run-llama2.sh
@@ -11,16 +11,17 @@ source /opt/intel/oneapi/setvars.sh
 #ZES_ENABLE_SYSMAN=1, Support to get free memory of GPU by sycl::aspect::ext_intel_free_memory. Recommended to use when --split-mode = layer.
 
 INPUT_PROMPT="Building a website can be done in 10 simple steps:\nStep 1:"
-MODEL_FILE=llama-2-7b.Q4_0.gguf
+MODEL_FILE=models/llama-2-7b.Q4_0.gguf
 NGL=33
+CONTEXT=8192
 
 if [ $# -gt 0 ]; then
     GGML_SYCL_DEVICE=$1
     echo "use $GGML_SYCL_DEVICE as main GPU"
     #use single GPU only
-    ZES_ENABLE_SYSMAN=1 ./build/bin/llama-cli -m models/${MODEL_FILE} -p "${INPUT_PROMPT}" -n 400 -e -ngl ${NGL} -s 0 -mg $GGML_SYCL_DEVICE -sm none
+    ZES_ENABLE_SYSMAN=1 ./build/bin/llama-cli -m ${MODEL_FILE} -p "${INPUT_PROMPT}" -n 400 -e -ngl ${NGL} -s 0 -c ${CONTEXT} -mg $GGML_SYCL_DEVICE -sm none
 
 else
     #use multiple GPUs with same max compute units
-    ZES_ENABLE_SYSMAN=1 ./build/bin/llama-cli -m models/${MODEL_FILE} -p "${INPUT_PROMPT}" -n 400 -e -ngl ${NGL} -s 0
+    ZES_ENABLE_SYSMAN=1 ./build/bin/llama-cli -m ${MODEL_FILE} -p "${INPUT_PROMPT}" -n 400 -e -ngl ${NGL} -s 0 -c ${CONTEXT}
 fi
```
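The script's argument handling determines the run mode: a device ID passed as `$1` pins a single GPU (`-sm none -mg <id>`), while no argument lets llama.cpp split layers across all GPUs with the same max compute units. A usage sketch, assuming the script is saved as `examples/sycl/run-llama2.sh` and run from the repository root:

```sh
# Single-GPU run on SYCL device 0; the script caps the context at 8192
./examples/sycl/run-llama2.sh 0

# Multi-GPU run across devices with the same max compute units
./examples/sycl/run-llama2.sh
```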
Author: Neo Zhang Jianyu