mirror of https://github.com/ggml-org/llama.cpp.git — synced 2025-10-30 08:42:00 +00:00
commit dd6e6d0b6a
* vocab : prevent stack overflow in tokenize
* vocab : return error instead of aborting on oversized token count
* vocab : return INT32_MIN from llama_tokenize on overflow