# llama.cpp/examples/imatrix
Compute an importance matrix for a model and a given text dataset. The result can be used during quantization to enhance the quality of the quantized models. More information is available here: https://github.com/ggerganov/llama.cpp/pull/4861
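As a rough sketch of the idea (based on the discussion in the PR linked above, not the exact implementation): while the model processes the calibration text, the tool accumulates per-column activation statistics $a_j \approx \langle x_j^2 \rangle$ for each tensor, and the quantizer can then minimize a weighted error of the form

$$
\sum_j a_j \, (w_j - q_j)^2
$$

where $w_j$ are the original weights and $q_j$ their quantized values, so columns that see large activations are reproduced more faithfully.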
## Usage
```bash
./imatrix -m <some_fp_model> -f <some_training_data> [-o <output_file>] [--verbosity <verbosity_level>]
        [-ofreq num_chunks] [-ow <0 or 1>] [other common params]
```
Here `-m` with a model name and `-f` with a file containing training data (such as `wiki.train.raw`) are mandatory.
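For example, a minimal invocation might look like this (both file names are placeholders):

```bash
./imatrix -m ggml-model-f16.gguf -f wiki.train.raw
```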
The parameters in square brackets are optional and have the following meaning:
- `-o` (or `--output-file`) specifies the name of the file where the computed data will be stored. If missing, `imatrix.dat` is used.
- `--verbosity` specifies the verbosity level. If set to `0`, no output other than the perplexity of the processed chunks will be generated. If set to `1`, a message is written to `stderr` each time the results are saved. If `>= 2`, a message is output each time data is collected for any tensor. The default verbosity level is `1`.
- `-ofreq` (or `--output-frequency`) specifies how often the results computed so far are saved to disk. The default is 10 (i.e., every 10 chunks); a combined example is shown below the list.
- `-ow` (or `--output-weight`) specifies whether data will be collected for the `output.weight` tensor. My experience is that it is better to not utilize the importance matrix when quantizing `output.weight`, so this is set to `false` by default.
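For instance, a sketch combining the optional flags above (file names are placeholders):

```bash
# save results every 20 chunks to a custom file, with per-tensor logging
./imatrix -m ggml-model-f16.gguf -f wiki.train.raw \
    -o custom-imatrix.dat --output-frequency 20 --verbosity 2
```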
For faster computation, make sure to use GPU offloading via the `-ngl` argument.
## Example
```bash
LLAMA_CUDA=1 make -j

# generate importance matrix (imatrix.dat)
./imatrix -m ggml-model-f16.gguf -f train-data.txt -ngl 99

# use the imatrix to perform a Q4_K_M quantization
./quantize --imatrix imatrix.dat ggml-model-f16.gguf ./ggml-model-q4_k_m.gguf q4_k_m
```
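To gauge how much the imatrix helped, one option (a sketch; `wiki.test.raw` stands in for whatever held-out text you use) is to compare the perplexity of the quantized model against the f16 original with the `perplexity` tool from the same repository:

```bash
# perplexity of the imatrix-assisted quantization (lower is better)
./perplexity -m ./ggml-model-q4_k_m.gguf -f wiki.test.raw -ngl 99

# baseline: perplexity of the original f16 model
./perplexity -m ggml-model-f16.gguf -f wiki.test.raw -ngl 99
```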