Mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-10-30 08:42:00 +00:00)
Commit 75cd4c7729
ci: bench: support SSE and fix prompt processing time

* server: add token usage in stream mode
* ci: bench: fix README.md EOL
* ci: bench: remove total pp and tg, as they are not accurate
* ci: bench: fix the case where no token is generated
* ci: bench: switch to the 95th percentile for pp and tg, as it is closer to what the server exports in its metrics
* ci: bench: fix the finish-reason rate
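The first two items touch the server's streaming path: the bench consumes the completion endpoint over server-sent events (SSE), and the server now reports token usage in stream mode. Below is a minimal Python sketch of a client reading such a stream; the endpoint path, payload shape, and the exact placement of the `usage` object on a chunk are illustrative assumptions, not taken from this commit's code.

```python
# Minimal sketch: consume an SSE completion stream and pick up the
# token-usage fields reported in stream mode. Endpoint, payload, and
# `usage` placement are assumptions for illustration.
import json
import requests

def stream_completion(url="http://localhost:8080/v1/chat/completions"):
    payload = {
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": True,
    }
    usage = None
    with requests.post(url, json=payload, stream=True) as resp:
        resp.raise_for_status()
        for raw in resp.iter_lines():
            if not raw or not raw.startswith(b"data: "):
                continue  # skip SSE keep-alives and non-data fields
            data = raw[len(b"data: "):]
            if data == b"[DONE]":
                break  # end-of-stream sentinel
            chunk = json.loads(data)
            # Assumed shape: token usage arrives as a `usage` object
            # on a streamed chunk (typically the final one).
            if chunk.get("usage"):
                usage = chunk["usage"]
    return usage

if __name__ == "__main__":
    print(stream_completion())
```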
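The no-token and percentile fixes can be illustrated together. A minimal sketch, assuming per-request result tuples of (prompt tokens, generated tokens, duration in seconds): skip requests that generated no tokens rather than folding a degenerate rate into the stats, and report the 95th percentile of the pp and tg rates instead of a total, since a quantile is closer to what the server exports in its metrics. All values are illustrative.

```python
# Minimal sketch of the bench aggregation changes, under assumed data shapes.
import numpy as np

# (n_prompt_tokens, n_generated_tokens, duration_s) per benchmark request
results = [(512, 128, 6.1), (512, 120, 5.8), (512, 0, 0.4), (512, 131, 6.4)]

pp_rates, tg_rates = [], []
for n_pp, n_tg, dt in results:
    pp_rates.append(n_pp / dt)
    if n_tg > 0:  # guard the case where a request generated no token
        tg_rates.append(n_tg / dt)

# Report the 95th percentile rather than a total/average.
print(f"pp p95: {np.percentile(pp_rates, 95):.1f} tok/s")
print(f"tg p95: {np.percentile(tg_rates, 95):.1f} tok/s")
```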