# LLaVA

Currently this implementation supports llava-v1.5 variants.

The pre-converted 7b and 13b models are available.

Once the API is confirmed, more models will be supported / uploaded.
## Usage

Build with cmake or run `make llava-cli` to build it.
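A minimal build sketch for both routes (the cmake target name is assumed to mirror the make target):

```sh
# Option 1: make, from the repository root
make llava-cli

# Option 2: cmake, out-of-tree build (target name assumed)
mkdir build && cd build
cmake ..
cmake --build . --config Release --target llava-cli
```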
After building, run `./llava-cli` to see the usage. For example:

```sh
./llava-cli -m ../llava-v1.5-7b/ggml-model-f16.gguf --mmproj ../llava-v1.5-7b/mmproj-model-f16.gguf --image path/to/an/image.jpg
```
**note**: A lower temperature like 0.1 is recommended for better quality. Add `--temp 0.1` to the command to do so.
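For instance, the example above with the temperature lowered:

```sh
./llava-cli -m ../llava-v1.5-7b/ggml-model-f16.gguf --mmproj ../llava-v1.5-7b/mmproj-model-f16.gguf --image path/to/an/image.jpg --temp 0.1
```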
## Model conversion

- Clone `llava-v1.5-7b` and `clip-vit-large-patch14-336` locally:

```sh
git clone https://huggingface.co/liuhaotian/llava-v1.5-7b

git clone https://huggingface.co/openai/clip-vit-large-patch14-336
```

- Install the required Python packages:

```sh
pip install -r examples/llava/requirements.txt
```

- Use `llava-surgery.py` to split the LLaVA model into its LLaMA and multimodal projector constituents:

```sh
python ./examples/llava/llava-surgery.py -m ../llava-v1.5-7b
```

- Use `convert-image-encoder-to-gguf.py` to convert the LLaVA image encoder to GGUF:

```sh
python ./examples/llava/convert-image-encoder-to-gguf.py -m ../clip-vit-large-patch14-336 --llava-projector ../llava-v1.5-7b/llava.projector --output-dir ../llava-v1.5-7b
```

- Use `convert.py` to convert the LLaMA part of LLaVA to GGUF:

```sh
python ./convert.py ../llava-v1.5-7b
```
Now both the LLaMA part and the image encoder are available in the `llava-v1.5-7b` directory.
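As a quick sanity check, the directory should now contain the converted files referenced in the Usage section (exact names can vary with the chosen precision):

```sh
ls ../llava-v1.5-7b
# Expected among the original model files:
#   ggml-model-f16.gguf    - the LLaMA part, produced by convert.py
#   mmproj-model-f16.gguf  - the image encoder/projector, produced by
#                            convert-image-encoder-to-gguf.py
#   llava.projector        - intermediate output of llava-surgery.py
```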
## TODO

- Support non-CPU backend for the image encoding part.
- Support different sampling methods.
- Support more model variants.