	main : escape prompt for cfg_negative_prompt and consecutive inputs in main with interactive (#3623)
* infill tokens correction
* server infill tokens correction
* removing any leading whitespace from the infill suffix, and removing the leading space token from the suffix when params.escape (sketched below)
* only rm when params.escape, rm space if possible which is added back or rm added space token
* Revert "only rm when params.escape, rm space if possible which is added back or rm added space token"
This reverts commit 63ba0b621f.
* fix interactive prompt escaping and fix server infill leading space handling
* remove unnecessary bool check
* process escapes for negative prompt and interactive consecutive prompts
* removed unnecessary static string escape
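The suffix handling above addresses a tokenizer quirk: when params.escape is set, escape processing can leave the infill suffix with a leading space, and the tokenizer then emits a spurious leading-space token. A minimal sketch of the idea (illustrative only; the helper name strip_leading_space is hypothetical, not the actual llama.cpp code):

#include <string>

// Hypothetical helper: when escape processing is enabled, drop a single
// leading space from the infill suffix so the tokenizer does not produce
// a spurious leading-space token.
static std::string strip_leading_space(std::string suffix, bool escape) {
    if (escape && !suffix.empty() && suffix.front() == ' ') {
        suffix.erase(0, 1);
    }
    return suffix;
}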
			
			
@@ -632,6 +632,7 @@ bool gpt_params_parse(int argc, char ** argv, gpt_params & params) {
         process_escapes(params.prompt);
         process_escapes(params.input_prefix);
         process_escapes(params.input_suffix);
+        process_escapes(sparams.cfg_negative_prompt);
         for (auto & antiprompt : params.antiprompt) {
             process_escapes(antiprompt);
         }
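For context, process_escapes (defined in the common code, not shown in this hunk) rewrites C-style backslash sequences typed on the command line into literal characters, so that e.g. "\n" in the negative prompt becomes a real newline. A minimal sketch of such a routine, assuming only the common escapes are handled (the actual implementation may support more, such as hex escapes):

#include <cstddef>
#include <string>

// Sketch: rewrite the string in place, converting "\n", "\t", "\'",
// "\"" and "\\" into their literal characters; unknown escapes are
// passed through unchanged.
static void process_escapes(std::string & input) {
    std::size_t out = 0;
    for (std::size_t in = 0; in < input.length(); ++in) {
        if (input[in] == '\\' && in + 1 < input.length()) {
            switch (input[++in]) {
                case 'n':  input[out++] = '\n'; break;
                case 't':  input[out++] = '\t'; break;
                case '\'': input[out++] = '\''; break;
                case '\"': input[out++] = '\"'; break;
                case '\\': input[out++] = '\\'; break;
                default:   input[out++] = '\\';
                           input[out++] = input[in]; break;
            }
        } else {
            input[out++] = input[in];
        }
    }
    input.resize(out);
}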
Author: vvhg1