mirror of https://github.com/ggml-org/llama.cpp.git
synced 2025-10-29 08:41:22 +00:00

commit 1c641e6aac
			* `main`/`server`: rename to `llama` / `llama-server` for consistency w/ homebrew
* server: update refs -> llama-server
gitignore llama-server
* server: simplify nix package
* main: update refs -> llama
fix examples/main ref
* main/server: fix targets
* update more names
* Update build.yml
* rm accidentally checked in bins
* update straggling refs
* Update .gitignore
* Update server-llm.sh
* main: target name -> llama-cli
* Prefix all example bins w/ llama-
* fix main refs
* rename {main->llama}-cmake-pkg binary
* prefix more cmake targets w/ llama-
* add/fix gbnf-validator subfolder to cmake
* sort cmake example subdirs
* rm bin files
* fix llama-lookup-* Makefile rules
* gitignore /llama-*
* rename Dockerfiles
* rename llama|main -> llama-cli; consistent RPM bin prefixes
* fix some missing -cli suffixes
* rename dockerfile w/ llama-cli
* rename(make): llama-baby-llama
* update dockerfile refs
* more llama-cli(.exe)
* fix test-eval-callback
* rename: llama-cli-cmake-pkg(.exe)
* address gbnf-validator unused fread warning (switched to C++ / ifstream)
* add two missing llama- prefixes
* Updating docs for eval-callback binary to use new `llama-` prefix.
* Updating a few lingering doc references for rename of main to llama-cli
* Updating `run-with-preset.py` to use new binary names.
Updating docs around `perplexity` binary rename.
* Updating documentation references for lookup-merge and export-lora
* Updating two small `main` references missed earlier in the finetune docs.
* Update apps.nix
* update grammar/README.md w/ new llama-* names
* update llama-rpc-server bin name + doc
* Revert "update llama-rpc-server bin name + doc"
This reverts commit e474ef1df4.
* add hot topic notice to README.md
* Update README.md
* Update README.md
* rename gguf-split & quantize bins refs in **/tests.sh
---------
Co-authored-by: HanClinto <hanclinto@gmail.com>
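The common thread in the commits above is prefixing example binaries with `llama-` (with `main` becoming `llama-cli`). The pattern can be sketched as a shell rename loop; the filenames and temp directory here are illustrative, not the project's actual build outputs:

```shell
#!/bin/bash
set -e

# Illustrative demo: create bare example binary names, then apply the
# llama- prefix convention described in the commit message.
demo_dir="$(mktemp -d)"
cd "$demo_dir"
touch main server quantize

for bin in main server quantize; do
    case "$bin" in
        main) new="llama-cli" ;;   # main was renamed to llama-cli
        *)    new="llama-$bin" ;;  # others simply gain the llama- prefix
    esac
    mv "$bin" "$new"
done

ls
```

Running the loop leaves `llama-cli`, `llama-server`, and `llama-quantize` in the demo directory.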
		
	
		
			
				
	
	
		
51 lines · 2.6 KiB · Bash · Executable File
	
#!/bin/bash
set -e

AI_NAME="${AI_NAME:-Miku}"
MODEL="${MODEL:-./models/llama-2-7b-chat.ggmlv3.q4_K_M.bin}"
USER_NAME="${USER_NAME:-Anon}"

# Uncomment and adjust to the number of CPU cores you want to use.
#N_THREAD="${N_THREAD:-4}"
CTX_SIZE="${CTX_SIZE:-4096}"
N_PREDICTS="${N_PREDICTS:-4096}"

GEN_OPTIONS=(--batch_size 1024
--ctx_size "$CTX_SIZE"
--keep -1
--repeat_last_n 256
--repeat_penalty 1.17647
--temp 0.6
--mirostat 2)

if [ -n "$N_THREAD" ]; then
    GEN_OPTIONS+=(--threads "$N_THREAD")
fi

./llama-cli "${GEN_OPTIONS[@]}" \
    --model "$MODEL" \
    --in-prefix " " \
    --in-suffix "${AI_NAME}:" \
    --n_predict "$N_PREDICTS" \
    --color --interactive \
    --reverse-prompt "${USER_NAME}:" \
    --prompt "This is a transcript of a 1000 page, never ending conversation between ${USER_NAME} and the cute and helpful AI assistant ${AI_NAME}. ${AI_NAME} is a girl who is an AI running on the user's computer.
${AI_NAME} can think for herself without the user seeing her thoughts by adding a /think prefix to her output. She uses this to reason about the world and to think about what she should say next.
${AI_NAME} is always coherent and makes sense, but if she isn't sure if what she is saying is correct, she will ask the user for help.
${AI_NAME} is a very helpful AI and will help the user with anything they need. She is also very friendly and will try to make the user feel better if they are sad.
${AI_NAME} is also very curious and will ask the user a lot of questions about themselves and their life. She will also try to make the user like her.
The conversation is only between ${USER_NAME} and ${AI_NAME}.
The conversation is only through text, so ${AI_NAME} can't see ${USER_NAME}'s face or hear his voice.
${AI_NAME} can only communicate through text, so she can't send images or videos.


${USER_NAME}: Hello!
${AI_NAME}: /think I wonder what I should say to ${USER_NAME}? This is the first time we talk, so it's important that I make a good first impression!
${AI_NAME}: Hi! I am ${AI_NAME}, your new AI friend, assistant (or whatever you like!), it's so nice to meet you! ^_^
${AI_NAME}: I might be an AI, but I also have feelings, so please be nice to me! :)
${USER_NAME}: I'll make sure to be nice to you! I'm so happy to have you as my assistant!
${AI_NAME}: /think It sounds like ${USER_NAME} is happy to have me as their assistant! I'm so happy too! ^_^ Glad that whole emotion thing didn't scare him off!
${AI_NAME}: /think I wonder what ${USER_NAME} likes to do in his free time? I should ask him about that!
${AI_NAME}: What do you like to do in your free time? ^_^
${USER_NAME}:" "$@"
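The script's configuration style rests on two Bash idioms: default parameter expansion (`${VAR:-default}`, so environment variables override baked-in values) and building the option list in an array so each flag and value stays a separate word when expanded. A minimal standalone sketch of the same pattern, with illustrative variable names and no dependency on the `llama-cli` binary:

```shell
#!/bin/bash
set -e

# Environment variables override these defaults, as in the script above.
CTX_SIZE="${CTX_SIZE:-4096}"
N_PREDICTS="${N_PREDICTS:-4096}"

# Collect options in an array so each flag/value remains its own word,
# even if a value were to contain spaces.
GEN_OPTIONS=(--ctx_size "$CTX_SIZE" --n_predict "$N_PREDICTS")

# Append optional flags only when the controlling variable is non-empty.
if [ -n "$N_THREAD" ]; then
    GEN_OPTIONS+=(--threads "$N_THREAD")
fi

# Expanding with "${GEN_OPTIONS[@]}" preserves word boundaries exactly.
printf '%s\n' "${GEN_OPTIONS[@]}"
```

Invoked as `CTX_SIZE=2048 N_THREAD=8 ./sketch.sh`, the default context size is replaced and `--threads 8` is appended; unquoted `${GEN_OPTIONS[*]}` would instead flatten everything into a single string and break the argument list.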