Mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-10-31 08:51:55 +00:00)
Updating docs for eval-callback binary to use new llama- prefix.
@@ -100,7 +100,7 @@ Have a look at existing implementation like `build_llama`, `build_dbrx` or `buil
 
 When implementing a new graph, please note that the underlying `ggml` backends might not support them all, support for missing backend operations can be added in another PR.
 
-Note: to debug the inference graph: you can use [eval-callback](../examples/eval-callback).
+Note: to debug the inference graph: you can use [llama-eval-callback](../examples/eval-callback).
 
 ## GGUF specification
 
@@ -6,7 +6,7 @@ It simply prints to the console all operations and tensor data.
 Usage:
 
 ```shell
-eval-callback \
+llama-eval-callback \
   --hf-repo ggml-org/models \
   --hf-file phi-2/ggml-model-q4_0.gguf \
   --model phi-2-q4_0.gguf \
Author: HanClinto