Mirror of https://github.com/ggml-org/llama.cpp.git
* BERT model graph construction (build_bert)
* WordPiece tokenizer (llm_tokenize_wpm)
* Add flag for non-causal attention models
* Allow for models that only output embeddings
* Support conversion of BERT models to GGUF
* Based on prior work by @xyzhang626 and @skeskinen

---------

Co-authored-by: Jared Van Bortel <jared@nomic.ai>
Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
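The commit message above names the new WordPiece path (llm_tokenize_wpm). As a rough, non-authoritative sketch of how WordPiece segmentation works, the stand-alone C++ example below applies greedy longest-match-first lookup against a toy vocabulary; the vocabulary contents, the "##" continuation prefix, and the [UNK] fallback are assumptions of the example, not llama.cpp's actual tokenizer data or API.

```cpp
// Illustrative sketch only: greedy longest-match-first WordPiece segmentation,
// the general scheme used by BERT-style tokenizers such as llm_tokenize_wpm.
// The toy vocabulary, the "##" continuation prefix and the [UNK] fallback are
// assumptions for the example, not llama.cpp's actual data structures or API.
#include <iostream>
#include <string>
#include <unordered_set>
#include <vector>

// Split one pre-tokenized word into WordPiece sub-tokens.
static std::vector<std::string> wordpiece_split(
        const std::string & word,
        const std::unordered_set<std::string> & vocab) {
    std::vector<std::string> pieces;
    size_t start = 0;
    while (start < word.size()) {
        size_t end = word.size();
        std::string match;
        // Try the longest candidate first and shrink until a vocab entry matches.
        while (end > start) {
            std::string piece = word.substr(start, end - start);
            if (start > 0) {
                piece = "##" + piece; // non-initial pieces carry the continuation prefix
            }
            if (vocab.count(piece)) {
                match = piece;
                break;
            }
            --end;
        }
        if (match.empty()) {
            // Classic BERT WordPiece maps the whole word to [UNK] if any part fails.
            return { "[UNK]" };
        }
        pieces.push_back(match);
        start = end;
    }
    return pieces;
}

int main() {
    // Toy vocabulary; a real BERT vocabulary is read from the converted GGUF file.
    const std::unordered_set<std::string> vocab = {
        "un", "##aff", "##able", "##affable", "play", "##ing",
    };
    const std::vector<std::string> words = { "unaffable", "playing" };
    for (const std::string & word : words) {
        for (const std::string & piece : wordpiece_split(word, vocab)) {
            std::cout << piece << ' ';
        }
        std::cout << '\n'; // prints "un ##affable" then "play ##ing"
    }
}
```

Greedy longest-match keeps the segmentation deterministic and cheap; in classic BERT WordPiece, a word that cannot be fully segmented collapses to a single [UNK] token.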
flake8 configuration (4 lines, 45 B, INI):
[flake8]
max-line-length = 125
ignore = W503
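Here max-line-length = 125 raises flake8's default 79-character limit for the project's Python scripts, and ignore = W503 turns off pycodestyle's "line break occurred before a binary operator" warning, which is commonly disabled because current PEP 8 prefers breaking lines before binary operators.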