Commit d6bd4d46dd in llama.cpp (mirror of https://github.com/ggml-org/llama.cpp.git)
* llama : support StableLM 2 1.6B
* convert : fix Qwen's set_vocab wrongly naming all special tokens [PAD{id}] (a sketch of the corrected naming logic follows this list)
* convert : refactor Qwen's set_vocab to use it for StableLM 2 too
* nix : add tiktoken to llama-python-extra
* convert : use presence of tokenizer.json to determine the StableLM tokenizer loader; it's a less arbitrary heuristic than the vocab size (see the loader sketch below)
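The [PAD{id}] bug in the second bullet is easiest to see in a standalone sketch. The function and helper names below are hypothetical (the real logic lives in llama.cpp's convert script and writes into a gguf vocab): when padding the vocabulary out to the model's declared size, only ids with no token at all should get a synthetic [PAD{id}] name, while added/special tokens keep their own text.

```python
# Hypothetical, self-contained sketch of the fix: only ids with no real
# token receive a synthetic "[PAD{id}]" name; added/special tokens keep
# their own text (the buggy version padded those too).
def build_token_list(reverse_vocab: dict[int, str],
                     added_vocab: set[str],
                     vocab_size: int) -> list[tuple[str, str]]:
    tokens: list[tuple[str, str]] = []
    for i in range(vocab_size):
        if i not in reverse_vocab:
            # genuinely unused id: pad it out
            tokens.append((f"[PAD{i}]", "user_defined"))
        elif reverse_vocab[i] in added_vocab:
            # real added/special token: keep its name, mark as control
            tokens.append((reverse_vocab[i], "control"))
        else:
            tokens.append((reverse_vocab[i], "normal"))
    return tokens

# id 3 is a real special token, id 4 is genuinely unused
print(build_token_list({0: "a", 1: "b", 2: "c", 3: "<|endoftext|>"},
                       {"<|endoftext|>"}, 5))
```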
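The last bullet describes a file-presence heuristic for picking the vocab loader. A minimal sketch, assuming method names in the style of the convert script (_set_vocab_gpt2 / _set_vocab_qwen; the stubs here only stand in for the real loaders):

```python
from pathlib import Path

# Sketch of the loader heuristic; the stub methods stand in for the
# converter's real vocab loaders.
class StableLMModel:
    def __init__(self, dir_model: Path) -> None:
        self.dir_model = dir_model

    def set_vocab(self) -> None:
        # A HuggingFace-style tokenizer.json means the standard BPE
        # loader applies; otherwise fall back to the Qwen-style
        # tiktoken vocab that StableLM 2 1.6B ships with. Checking
        # what the model repo actually contains is a less arbitrary
        # signal than guessing from the vocab size.
        if (self.dir_model / "tokenizer.json").is_file():
            self._set_vocab_gpt2()
        else:
            self._set_vocab_qwen()

    def _set_vocab_gpt2(self) -> None:
        print("loading GPT-2/BPE vocab from tokenizer.json")

    def _set_vocab_qwen(self) -> None:
        print("loading Qwen-style tiktoken vocab")
```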