mirror of https://github.com/ggml-org/llama.cpp.git

commit 0da5d86026
* slot.can_batch_with
* lora per request
* test: force disable cache prompt
* move can_batch_with check
* fix condition
* add slow test with llama 8b
* update docs
* move lora change task to queue
* Apply suggestions from code review

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* lora_base
* remove redundant check

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
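The bullets above describe letting each request select its own LoRA adapters and scales, with slot.can_batch_with deciding whether slots that use different adapter sets may share a batch. A minimal client-side sketch of what that enables, assuming a local llama-server on port 8080 and a per-request "lora" list of {id, scale} entries in the /completion payload (the endpoint, field names, and values here are illustrative assumptions, not taken from the commit):

import requests

SERVER_URL = "http://localhost:8080"  # assumed local llama-server address

payload = {
    "prompt": "Write a haiku about autumn.",
    "n_predict": 64,
    # Hypothetical per-request LoRA selection: adapter 0 applied at half strength.
    "lora": [{"id": 0, "scale": 0.5}],
}

resp = requests.post(f"{SERVER_URL}/completion", json=payload, timeout=60)
resp.raise_for_status()
print(resp.json().get("content", ""))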
		
			
				
	
	
		
9 lines · 135 B · Plaintext
aiohttp~=3.9.3
pytest~=8.3.3
huggingface_hub~=0.23.2
numpy~=1.26.4
openai~=1.55.3
prometheus-client~=0.20.0
requests~=2.32.3
wget~=3.2
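These pins cover what the server test harness imports. A short sketch of how a couple of them are typically exercised against a locally running llama-server, where the base URL, endpoints, and model alias are assumptions for illustration:

import requests
from openai import OpenAI

BASE = "http://localhost:8080"  # assumed server address

# Plain HTTP readiness check with requests.
health = requests.get(f"{BASE}/health", timeout=10)
print("health:", health.status_code)

# Chat completion through the server's OpenAI-compatible /v1 API via the openai client.
client = OpenAI(base_url=f"{BASE}/v1", api_key="sk-no-key-required")
chat = client.chat.completions.create(
    model="default",  # hypothetical alias; a single-model server may ignore this
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)
print(chat.choices[0].message.content)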