Mirror of https://github.com/ggml-org/llama.cpp.git
* server: #5655 - continue to update other slots on concurrent embedding requests
* server: tests: add multi-user embeddings as fixed
* server: tests: add concurrent requests against the OAI-compatible embedding endpoint
* server: tests: add OAI-compatible embeddings with multiple inputs
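The test additions above are expressed as Gherkin scenarios, like the feature file shown below. As a rough sketch only, with the step wording and endpoint path assumed here rather than copied from the actual suite, a concurrent multi-user OAI-compatible embeddings check could look like this:

Feature: llama.cpp server embeddings

  Scenario: Multi users OAI compatible embeddings
    # Assumed steps: the real step definitions in the test suite may differ
    Given a server listening on localhost with an embedding-capable model
    And 4 prompts queued by 4 concurrent users
    When each user requests embeddings through the OAI-compatible /v1/embeddings endpoint
    Then every request receives an embedding of the expected size
    And the remaining slots continue to be processed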
		
			
				
	
	
		
Gherkin · 5 lines · 83 B
# List of ongoing issues
@bug
Feature: Issues
  # No confirmed issue at the moment
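When a regression is confirmed, a scenario reproducing it would be recorded under this feature so it can be selected or skipped through the @bug tag. The entry below is a purely hypothetical placeholder illustrating the shape such a scenario would take, not a real issue:

  Scenario: Hypothetical confirmed issue (placeholder only)
    # Assumed step wording, for illustration
    Given a server listening on localhost
    When a request reproducing the reported problem is sent
    Then the server responds without crashing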