mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-10-30)

Minor style changes
@@ -178,13 +178,15 @@ If you want a more ChatGPT-like experience, you can run in interactive mode by p
In this mode, you can always interrupt generation by pressing Ctrl+C and enter one or more lines of text, which will be converted into tokens and appended to the current context. You can also specify a *reverse prompt* with the parameter `-r "reverse prompt string"`. Generation will then pause and prompt for user input whenever the exact tokens of the reverse prompt string are encountered in the output. A typical use is a prompt that makes LLaMA emulate a chat between multiple users, say Alice and Bob, passed together with `-r "Alice:"`.
Here is an example few-shot interaction, invoked with the command

```bash
# default arguments using the 7B model
./chat.sh

# custom arguments using the 13B model
./main -m ./models/13B/ggml-model-q4_0.bin -n 256 --repeat_penalty 1.0 --color -i -r "User:" -f prompts/chat-with-bob.txt
```
Note the use of `--color` to distinguish between user input and generated text.
Georgi Gerganov