Mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-11-03 09:22:01 +00:00)
	llama : add PLM GGUF Conversion & Inference Support (#12457)
* add edgellm model arch [conversation feature doesn't work]
* remove output.weight layer for edgellm arch (see the tied-embedding sketch below)
* [Model] update the name of the model
* update the name of the model arch in convert gguf
* [Model] Refactor the model arch into llama-model
* [Bug] Fix the bug in create attn kv
* [Code] Fix editorconfig errors
* [Code] Remove trailing whitespace
* [Code] Remove trailing whitespace
* [Code] Change the order of model arch in list
* [Code] Fix flake8 lint errors
* Remove trailing white space
* [Code] Remove call in model arch
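The "remove output.weight layer" item refers to tied output projections: when a converted model ships no separate output.weight tensor, llama.cpp falls back to reusing token_embd.weight. Below is a minimal sketch of that common fallback pattern as it appears in llama_model::load_tensors() (src/llama-model.cpp), shown for context rather than as the exact PLM tensor list from this commit.

    // Sketch: tied-embedding fallback used when an arch has no output.weight.
    // The output tensor is marked optional; if it is missing from the GGUF,
    // the token embedding matrix is duplicated and used as the output head.
    tok_embd = create_tensor(tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}, 0);
    output   = create_tensor(tn(LLM_TENSOR_OUTPUT,     "weight"), {n_embd, n_vocab}, llama_model_loader::TENSOR_NOT_REQUIRED);
    if (output == NULL) {
        output = create_tensor(tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}, llama_model_loader::TENSOR_DUPLICATED);
    }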
@@ -44,6 +44,7 @@ enum llm_type {
     LLM_TYPE_1_4B,
     LLM_TYPE_1_5B,
     LLM_TYPE_1_6B,
+    LLM_TYPE_1_8B,
     LLM_TYPE_2B,
     LLM_TYPE_2_8B,
     LLM_TYPE_2_9B,
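For context, the hunk above extends the llm_type enum (defined in src/llama-model.h) with the size class used for the 1.8B PLM model. A minimal sketch of how such a type constant is typically selected while loading hyperparameters follows; the LLM_ARCH_PLM case structure mirrors the existing switch in llama_model::load_hparams(), but the n_layer value shown is an illustrative assumption, not copied from this commit.

    // Sketch: selecting the model type in llama_model::load_hparams()
    // (src/llama-model.cpp). The n_layer -> type mapping for PLM is an
    // assumption for illustration only.
    case LLM_ARCH_PLM:
        {
            ml.get_key(LLM_KV_ATTENTION_LAYERNORM_RMS_EPS, hparams.f_norm_rms_eps);
            switch (hparams.n_layer) {
                case 32: type = LLM_TYPE_1_8B;    break;
                default: type = LLM_TYPE_UNKNOWN;
            }
        } break;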