Mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-10-31 08:51:55 +00:00)
Commit 9a390c4829
* add constructor to initialize server_context::batch, preventing destructor's call to llama_batch_free from causing an invalid free()
* Update tools/server/server.cpp
* use C++11 initializer syntax
* switch from Copy-list-initialization to Direct-list-initialization

Co-authored-by: Xuan-Son Nguyen <thichthat@gmail.com>
190 KiB