	gguf : deduplicate (#2629)
* gguf : better type names
* dedup : CPU + Metal is working
* ggml : fix warnings about unused results
* llama.cpp : fix line feed and compiler warning
* llama : fix strncpy warning + note that token_to_str does not write a null terminator
* llama : restore the original load/save session implementation (will migrate this to GGUF in the future)
* convert-llama-h5-to-gguf.py : support alt ctx param name
* ggml : assert when using ggml_mul with non-F32 src1
* examples : dedup simple

---------

Co-authored-by: klosax <131523366+klosax@users.noreply.github.com>
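Of the list above, the ggml_mul change is the one that affects callers directly: the multiply kernels only implement an F32 second operand, so ggml now asserts instead of silently miscomputing. Below is a minimal sketch of the caller-side contract, assuming only the public ggml C API (ggml_init, ggml_new_tensor_1d, ggml_mul); the assertion itself lives inside ggml and is paraphrased here, not copied from the commit.

```cpp
#include "ggml.h"

int main(void) {
    // Small scratch context; the size is arbitrary for this demonstration.
    struct ggml_init_params ip = {
        /*.mem_size   =*/ 16u*1024*1024,
        /*.mem_buffer =*/ NULL,
        /*.no_alloc   =*/ false,
    };
    struct ggml_context * ctx = ggml_init(ip);

    struct ggml_tensor * a = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 8);
    struct ggml_tensor * b = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 8);

    // OK: src1 (b) is F32, which is what the multiply kernels implement.
    struct ggml_tensor * c = ggml_mul(ctx, a, b);
    (void) c;

    // After this commit, passing e.g. a GGML_TYPE_F16 tensor as src1
    // trips an assert inside ggml_mul rather than corrupting results.

    ggml_free(ctx);
    return 0;
}
```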
@@ -26,7 +26,6 @@ int main(int argc, char ** argv) {
-    auto lparams = llama_context_default_params();
-
-    lparams.n_ctx     = params.n_ctx;
-    lparams.n_gqa     = params.n_gqa;
-    lparams.seed      = params.seed;
-    lparams.f16_kv    = params.memory_f16;
-    lparams.use_mmap  = params.use_mmap;
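The hunk shows only the deleted side of the split diff; the replacement lines were lost in this page's rendering. As a hedged reconstruction of where the dedup points, here is a minimal sketch of the consolidated setup, assuming the examples/common helpers of that era (gpt_params, gpt_params_parse, llama_context_params_from_gpt_params); treat the exact helper names as an inference from the commit message, not something shown on this page.

```cpp
#include "common.h"
#include "llama.h"

#include <cstdio>

int main(int argc, char ** argv) {
    gpt_params params;
    if (!gpt_params_parse(argc, argv, params)) {
        return 1;
    }

    llama_backend_init(params.numa);

    // One shared helper replaces the hand-copied lparams.* assignments
    // (n_ctx, n_gqa, seed, f16_kv, use_mmap, ...) deleted in the hunk above.
    llama_context_params lparams = llama_context_params_from_gpt_params(params);

    llama_model * model = llama_load_model_from_file(params.model.c_str(), lparams);
    if (model == NULL) {
        fprintf(stderr, "failed to load model '%s'\n", params.model.c_str());
        return 1;
    }

    llama_context * ctx = llama_new_context_with_model(model, lparams);

    // ... example-specific inference would go here ...

    llama_free(ctx);
    llama_free_model(model);
    llama_backend_free();
    return 0;
}
```

Whether each example calls exactly this helper or inlines it may differ; the point of the dedup is that the field-by-field copying lives in one place instead of being repeated per example.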