mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-10-30 08:42:00 +00:00)

Commit 1c641e6aac

			* `main`/`server`: rename to `llama` / `llama-server` for consistency w/ homebrew
* server: update refs -> llama-server
gitignore llama-server
* server: simplify nix package
* main: update refs -> llama
fix examples/main ref
* main/server: fix targets
* update more names
* Update build.yml
* rm accidentally checked in bins
* update straggling refs
* Update .gitignore
* Update server-llm.sh
* main: target name -> llama-cli
* Prefix all example bins w/ llama-
* fix main refs
* rename {main->llama}-cmake-pkg binary
* prefix more cmake targets w/ llama-
* add/fix gbnf-validator subfolder to cmake
* sort cmake example subdirs
* rm bin files
* fix llama-lookup-* Makefile rules
* gitignore /llama-*
* rename Dockerfiles
* rename llama|main -> llama-cli; consistent RPM bin prefixes
* fix some missing -cli suffixes
* rename dockerfile w/ llama-cli
* rename(make): llama-baby-llama
* update dockerfile refs
* more llama-cli(.exe)
* fix test-eval-callback
* rename: llama-cli-cmake-pkg(.exe)
* address gbnf-validator unused fread warning (switched to C++ / ifstream)
* add two missing llama- prefixes
* Updating docs for eval-callback binary to use new `llama-` prefix.
* Updating a few lingering doc references for rename of main to llama-cli
* Updating `run-with-preset.py` to use new binary names.
Updating docs around `perplexity` binary rename.
* Updating documentation references for lookup-merge and export-lora
* Updating two small `main` references missed earlier in the finetune docs.
* Update apps.nix
* update grammar/README.md w/ new llama-* names
* update llama-rpc-server bin name + doc
* Revert "update llama-rpc-server bin name + doc"
This reverts commit e474ef1df4.
* add hot topic notice to README.md
* Update README.md
* Update README.md
* rename gguf-split & quantize bins refs in **/tests.sh
---------
Co-authored-by: HanClinto <hanclinto@gmail.com>
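
The commit above renames the example binaries to carry a llama- prefix. For orientation, a minimal sketch of how invocations change after the rename; the model path, prompt, and flags here are illustrative assumptions, not taken from the commit:

    # Before this commit, the example binaries were named after their source directories:
    #   ./main   -m ./models/model.gguf -p "Hello"
    #   ./server -m ./models/model.gguf --port 8080
    # After the rename, the same binaries carry the llama- prefix:
    ./llama-cli    -m ./models/model.gguf -p "Hello"
    ./llama-server -m ./models/model.gguf --port 8080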
		
	
		
			
				
	
	
		
31 lines · 851 B · Bash · Executable File
	
#!/bin/bash
set -e

MODEL=./models/ggml-vicuna-13b-1.1-q4_0.bin
MODEL_NAME=Vicuna

# exec options
prefix="Human: " # Ex. Vicuna uses "Human: "
opts="--temp 0 -n 80" # additional flags
nl='
'
introduction="You will be playing a game of Jeopardy. Simply answer the question in the correct format (Ex. What is Paris, or Who is George Washington)."

# file options
question_file=./examples/jeopardy/questions.txt
touch ./examples/jeopardy/results/$MODEL_NAME.txt
output_file=./examples/jeopardy/results/$MODEL_NAME.txt

counter=1

echo 'Running'
while IFS= read -r question
do
  exe_cmd="./llama-cli -p "\"$prefix$introduction$nl$prefix$question\"" "$opts" -m ""\"$MODEL\""" >> ""\"$output_file\""
  echo $counter
  echo "Current Question: $question"
  eval "$exe_cmd"
  echo -e "\n------" >> $output_file
  counter=$((counter+1))
done < "$question_file"
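
The script assembles the llama-cli command as a single string and runs it through eval, which is why the prompt, model path, and output file need the escaped quotes above. Below is a minimal sketch of an eval-free loop body using a Bash array; it reuses the variables defined in the script, assumes the flags in $opts contain no embedded spaces, and is an illustration rather than part of the repository file:

    # Sketch: call llama-cli directly with an argument array instead of eval.
    # $prefix, $introduction, $nl, $opts, $MODEL, and $output_file come from the script above;
    # leaving $opts unquoted splits it into separate flags during array assignment.
    cmd=(./llama-cli -p "$prefix$introduction$nl$prefix$question" $opts -m "$MODEL")
    "${cmd[@]}" >> "$output_file"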