Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-10-31 08:51:55 +00:00
Commit 64387f6e95
* gguf-py: implement byteswapping for Q4_0

  This is needed to byteswap the Mistral model. Also restore the original shapes after byteswapping tensors. This is not needed at the moment, but do it in case they are used in the future.

* Rework byteswapping code in gguf-py

  Move the details out of the tensor-block byteswapping code.
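The Q4_0 byteswapping described in the first bullet can be pictured with a short sketch. The block layout assumed below (a 2-byte float16 scale followed by 16 bytes of packed 4-bit quants, i.e. 18 bytes per block of 32 weights) comes from ggml's Q4_0 definition; the function name and the NumPy-based approach are illustrative only and are not the actual gguf-py code.

```python
import numpy as np

# Bytes per Q4_0 block: 2-byte float16 scale + 16 bytes of packed 4-bit quants.
Q4_0_BLOCK_BYTES = 18

def byteswap_q4_0_inplace(data: np.ndarray) -> None:
    """Swap the two scale bytes at the start of every Q4_0 block.

    `data` is the raw tensor data as a flat uint8 array whose size is a
    multiple of 18. It is viewed as (n_blocks, 18) only for the swap, so the
    caller's original shape is left untouched.
    """
    blocks = data.reshape(-1, Q4_0_BLOCK_BYTES)  # view over the same buffer
    lo = blocks[:, 0].copy()                     # save the low scale byte
    blocks[:, 0] = blocks[:, 1]                  # low <- high
    blocks[:, 1] = lo                            # high <- saved low
    # The packed 4-bit quants in bytes 2..17 are single bytes and therefore
    # endian-neutral: nothing to swap there.
```

Only the per-block scale is byte-order sensitive; reversing whole blocks would also scramble the nibble data, which is why the sketch touches just the first two bytes of each block.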