	use weights_only in conversion script (#32)
This prevents malicious weights from executing arbitrary code by restricting the unpickler to loading only tensors, primitive types, and dictionaries.
@@ -86,7 +86,7 @@ for p in range(n_parts):
     if (p > 0):
         fname_out = sys.argv[1] + "/ggml-model-" + ftype_str[ftype] + ".bin" + "." + str(p)

-    model = torch.load(fname_model, map_location="cpu")
+    model = torch.load(fname_model, map_location="cpu", weights_only=True)

     fout = open(fname_out, "wb")
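For context, a minimal sketch of the same safe-loading pattern in isolation (the checkpoint path here is hypothetical, not from the script): with weights_only=True, torch.load uses a restricted unpickler, so a pickle that tries to smuggle in arbitrary callables raises an error instead of executing.

    import torch

    fname_model = "models/7B/consolidated.00.pth"  # hypothetical checkpoint path
    try:
        # weights_only=True swaps in a restricted unpickler that only
        # reconstructs tensors, primitive types, and plain containers.
        model = torch.load(fname_model, map_location="cpu", weights_only=True)
    except Exception as err:
        # A pickle carrying arbitrary objects or code is rejected rather than run.
        print(f"refused to load checkpoint: {err}")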
Authored by deepdiffuser