2307523d32
* Vulkan loader code
* Fix matmul kernel, continue implementation
* Continue implementation
* Vulkan memory management
* Vulkan development
* Matmul call
* Add aligned malloc and free for VMA
* Continue implementation
* First matmul success
* GEMM Kernel optimization
* 1D Blocktiling
* 2D Blocktiling
* Write coalescing
* Continue vulkan implementation and optimization
* First FP16 attempt, disabled for now
* Code abstraction, FP16 implementation, fix kernel, add FP16 to FP32 kernel
* Enable device extensions properly, restore fp16 matmul op
* Fix mulmat_f16
* Output FP32 in fp16 matmul shader
* Fix f16_to_f32 kernel
* dequant_q4_0 kernel
* Add VMA library
* Avoid requesting dedicated memory, VMA can decide that by itself
* Add bounds checking to matmul kernels, improve implementation, fix command buffers not freed properly
* add cmake commands
* Add 2d write operation, profiling code
* Fix 2d write
* Fix queue selection for AMD RADV
* Fix trailing whitespace in vk_mem_alloc.h
* Add WIP warp tile mat mul shaders
* Disable glslc optimization
* Disable glslc optimization for CMake
* Optimize warptile matmul shader, replace blocktile with it
* Add split-k optimization for small matrix multiplication
  Use semaphores for synchronization instead of fences or waitidle
  Rework async write/read for synchronization
* Fix validation errors, improve compatibility with AMD GPUs
* Rework command buffer handling
* Variable matmul kernel using specialization constants
* Fix synchronization on AMD, add barriers for buffer ownership transfer, add debug flag and prints
* Reuse semaphores
* Handle stage flags during command buffer submission properly
* Increase matmul test runs for consistent results
* Fix F32 matmul
* Add vectorized loading and zeropadding for matrix multiplication
* Use pinned memory for f16 preprocessing
* Don't force aligned matmul
* Don't free before queue done
* Replace VMA library with native Vulkan buffer management
* Basic offloading support with mul_f32 and dmmv for q4_0
* Run glslc commands in parallel
* Unroll loops in dmmv shader
* Reduce usage of waitIdle
* Reuse pinned allocation for f16 conversion
* Handle devices with only a single queue
* Fix trailing whitespace in CMakeLists.txt
* Allow parallel execution of kernels, parallelize third and fourth dimension calls
* Add fallback for devices only supporting one DescriptorSet per DescriptorPool
* Move to graph function similar to CUDA implementation
* Use F16 kernel for most things, replace q_f32 with mul_mat_q_f16 function
* Add F32 dmmv shaders
* Batch submissions
* Add .spv to gitignore
* Split off matrix vector multiplication for separate optimization
* Use single command buffer for matrix vector multiplication ops
* Reduce overhead of mul_f32 calls by using a single command buffer
* Add submission batching to mul_f32
* Fix tests
* Add missing barrier
* Add further missing barrier
* Add further ops
* Replace vk::QueueFamilyIgnored with VK_QUEUE_FAMILY_IGNORED to support more Vulkan header versions
* Remove unnecessary cblas link
* Fix descriptor set pre-allocation assert
* Add runtime shader compilation, start transferring shaders to this approach
* Transfer remaining shaders to header and compile on runtime
* Fix fp32 fallback if device doesn't support fp16, add force disable env var GGML_VULKAN_DISABLE_F16
* Add support for q4_1, q5_0, q5_1 and q8_0
* Remove unnecessary scalar layout extension
* Parse graph early to pre-record command buffers
* Add q6_k support
* Add multi-submit for command buffers
* Fix q6_k dequant shader for AMD
* Fix q6_k for GPUs without fp16 support
* Simplify q6_k fp16 fix
* Minor fixes
* Fix wg_denom of m-mulmat shaders
* Add Python-based Vulkan shader generator
* Replace shaderc dependency with precompiled shaders
  Fix python script to generate shaders
* Clean up code
* Fix shader generator script Windows compatibility
  Co-authored-by: Concedo <39025047+LostRuins@users.noreply.github.com>
* Close file before deletion
* Fix vulkan shader fp32 name
* Add q2_k and q3_k support
  Add validation check to compare shader results to cpu results
* Add q4_k support
* Add q5_k support
* Bake SPIR-V bytecode into the library instead of loading shaders from file
* Switch to signal semaphores for flexibility
  Prepare broadcasting support for mul mat
* Finish broadcasting mul mat support for GQA
* Clean up unused functions
  Add repeat op
* Add further ops, not yet enabled. Improve semaphore code
* Reduce number of used semaphores by utilizing timelines more properly
* Remove queue information
* Reuse timeline semaphores, allow parallel operation with binary semaphores to work around nvidia driver limitations
* Add Vulkan to llama-bench
* Remove cblas dependency
* Fix matmul k-split bug
* Fix q4_k dmmv K_QUANTS_PER_ITERATION 1 shader
* Add RMS Norm shader, rework op_f32 shader setup, fix matmul bug
* Fix issues with float16 overflows in shaders
* Fix issues with older Vulkan headers on Ubuntu 22.04
* Allow multi-op partial offloading by parsing the graph to preallocate enough between-op buffers
* Implement further ops, rework op_f32 calls, fix bugs
* Finish full offloading support, add last remaining ops, fix bugs, remove redundant code
* Upload generated file ggml-vulkan-shaders.hpp, remove redundant shaders
* Merge upstream changes, fix conflicts, adapt soft_max op
* Fix Python and shader header format
* Free model gpu buffers on exit
* Use single queue per device to simplify code
* Add matmul shader support for running multiple calculations in parallel
* Switch from semaphore-synchronized multiple command buffers per op to single command buffer for multiple ops, whole graph if possible
* Fix missing event cast
* Replace uint64_t(-1) with UINT64_MAX, rename function for clarity
* Fix warning about empty C function parameters
* Fix compiler warnings
* Properly implement Vulkan backend buffer handling
* Fix oversized host staging buffers
* Simplify barrier synchronization calls
* Fix gcc warnings
* Implement max_size for backend buffer types to limit the size of a single allocation
* Use min of maxMemoryAllocationSize and maxBufferSize for device max allocation size
* refactor multi buf
* Disable unsupported ops to fix tests
* Check for maintenance4 support before using it
* Handle devices with only a single queue
* Fix single queue logic
* propagate buffer usage in multi buffers
* Implement rope_neox op
* Cleanup header and other files
* Simplify gpu_extras by removing events and putting staging memcpys into contexts
* Move queue into context
  Add not-yet-enabled async backend ops
* Simplify context use, optimize matmul shader for warp size 64 (AMD GCN), fix split_k matmul shader optimization
* Add get_max_size to SYCL backend.
  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* llama : fix trailing whitespace

---------

Co-authored-by: Henri Vasserman <henv@hot.ee>
Co-authored-by: Concedo <39025047+LostRuins@users.noreply.github.com>
Co-authored-by: slaren <slarengh@gmail.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
213 lines · 11 KiB · C
#pragma once

#include "ggml.h"
#include "ggml-alloc.h"

#ifdef  __cplusplus
extern "C" {
#endif

    typedef struct ggml_backend_buffer_type * ggml_backend_buffer_type_t;
    typedef struct ggml_backend_buffer * ggml_backend_buffer_t;
    typedef struct ggml_backend * ggml_backend_t;
    typedef void * ggml_backend_graph_plan_t;

    //
    // Backend buffer
    //

    // buffer type
    GGML_API           const char *          ggml_backend_buft_name            (ggml_backend_buffer_type_t buft);
    GGML_API GGML_CALL ggml_backend_buffer_t ggml_backend_buft_alloc_buffer    (ggml_backend_buffer_type_t buft, size_t size);
    GGML_API           size_t                ggml_backend_buft_get_alignment   (ggml_backend_buffer_type_t buft);
    GGML_API           size_t                ggml_backend_buft_get_max_size    (ggml_backend_buffer_type_t buft);
    GGML_API GGML_CALL size_t                ggml_backend_buft_get_alloc_size  (ggml_backend_buffer_type_t buft, struct ggml_tensor * tensor);
    GGML_API           bool                  ggml_backend_buft_supports_backend(ggml_backend_buffer_type_t buft, ggml_backend_t backend);
    GGML_API           bool                  ggml_backend_buft_is_host         (ggml_backend_buffer_type_t buft);

    // buffer
    enum ggml_backend_buffer_usage {
        GGML_BACKEND_BUFFER_USAGE_ANY = 0,
        GGML_BACKEND_BUFFER_USAGE_WEIGHTS = 1,
    };

    GGML_API           const char *               ggml_backend_buffer_name          (ggml_backend_buffer_t buffer);
    GGML_API           void                       ggml_backend_buffer_free          (ggml_backend_buffer_t buffer);
    GGML_API           void *                     ggml_backend_buffer_get_base      (ggml_backend_buffer_t buffer);
    GGML_API           size_t                     ggml_backend_buffer_get_size      (ggml_backend_buffer_t buffer);
    GGML_API GGML_CALL void                       ggml_backend_buffer_init_tensor   (ggml_backend_buffer_t buffer, struct ggml_tensor * tensor);
    GGML_API           size_t                     ggml_backend_buffer_get_alignment (ggml_backend_buffer_t buffer);
    GGML_API           size_t                     ggml_backend_buffer_get_max_size  (ggml_backend_buffer_t buffer);
    GGML_API           size_t                     ggml_backend_buffer_get_alloc_size(ggml_backend_buffer_t buffer, struct ggml_tensor * tensor);
    GGML_API           void                       ggml_backend_buffer_clear         (ggml_backend_buffer_t buffer, uint8_t value);
    GGML_API           bool                       ggml_backend_buffer_is_host       (ggml_backend_buffer_t buffer);
    GGML_API           void                       ggml_backend_buffer_set_usage     (ggml_backend_buffer_t buffer, enum ggml_backend_buffer_usage usage);
    GGML_API           ggml_backend_buffer_type_t ggml_backend_buffer_get_type      (ggml_backend_buffer_t buffer);
    GGML_API           void                       ggml_backend_buffer_reset         (ggml_backend_buffer_t buffer);
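
    // A minimal sketch of how the buffer-type and buffer calls above fit together,
    // assuming the CPU buffer type declared further down in this header (the size is
    // an arbitrary example):
    /*
        ggml_backend_buffer_type_t buft = ggml_backend_cpu_buffer_type();
        ggml_backend_buffer_t      buf  = ggml_backend_buft_alloc_buffer(buft, 16*1024*1024);

        size_t alignment = ggml_backend_buft_get_alignment(buft); // required tensor data alignment
        void * base      = ggml_backend_buffer_get_base(buf);     // start of the allocated region

        ggml_backend_buffer_clear(buf, 0); // fill the buffer with zeros
        ggml_backend_buffer_free(buf);
    */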

    //
    // Backend
    //


    GGML_API const char * ggml_backend_name(ggml_backend_t backend);
    GGML_API void         ggml_backend_free(ggml_backend_t backend);

    GGML_API ggml_backend_buffer_type_t ggml_backend_get_default_buffer_type(ggml_backend_t backend);
    GGML_API ggml_backend_buffer_t      ggml_backend_alloc_buffer(ggml_backend_t backend, size_t size);
    GGML_API size_t                     ggml_backend_get_alignment(ggml_backend_t backend);
    GGML_API size_t                     ggml_backend_get_max_size(ggml_backend_t backend);

    GGML_API void ggml_backend_tensor_set_async(ggml_backend_t backend,       struct ggml_tensor * tensor, const void * data, size_t offset, size_t size);
    GGML_API void ggml_backend_tensor_get_async(ggml_backend_t backend, const struct ggml_tensor * tensor,       void * data, size_t offset, size_t size);

    GGML_API GGML_CALL void ggml_backend_tensor_set(      struct ggml_tensor * tensor, const void * data, size_t offset, size_t size);
    GGML_API GGML_CALL void ggml_backend_tensor_get(const struct ggml_tensor * tensor,       void * data, size_t offset, size_t size);

    GGML_API void ggml_backend_synchronize(ggml_backend_t backend);

    GGML_API ggml_backend_graph_plan_t ggml_backend_graph_plan_create (ggml_backend_t backend, struct ggml_cgraph * cgraph);

    GGML_API void ggml_backend_graph_plan_free   (ggml_backend_t backend, ggml_backend_graph_plan_t plan);
    GGML_API void ggml_backend_graph_plan_compute(ggml_backend_t backend, ggml_backend_graph_plan_t plan);
    GGML_API bool ggml_backend_graph_compute     (ggml_backend_t backend, struct ggml_cgraph * cgraph);
    GGML_API bool ggml_backend_supports_op       (ggml_backend_t backend, const struct ggml_tensor * op);

    // tensor copy between different backends
    GGML_API void ggml_backend_tensor_copy(struct ggml_tensor * src, struct ggml_tensor * dst);
    GGML_API void ggml_backend_tensor_copy_async(ggml_backend_t backend, struct ggml_tensor * src, struct ggml_tensor * dst); // automatic fallback to sync copy
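
    // A minimal sketch of the typical compute flow on a single backend, assuming the
    // tensors (input, output) and the cgraph were built and allocated elsewhere
    // (e.g. with ggml-alloc):
    /*
        ggml_backend_t backend = ggml_backend_cpu_init(); // or any other backend

        ggml_backend_tensor_set(input, data, 0, ggml_nbytes(input));     // upload inputs
        ggml_backend_graph_compute(backend, graph);                      // run the graph
        ggml_backend_tensor_get(output, result, 0, ggml_nbytes(output)); // read back results

        ggml_backend_synchronize(backend); // only required after the _async variants
        ggml_backend_free(backend);
    */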

    //
    // CPU backend
    //

    GGML_API ggml_backend_t ggml_backend_cpu_init(void);

    GGML_API GGML_CALL bool ggml_backend_is_cpu           (ggml_backend_t backend);
    GGML_API           void ggml_backend_cpu_set_n_threads(ggml_backend_t backend_cpu, int n_threads);

    // Create a backend buffer from an existing pointer
    GGML_API GGML_CALL ggml_backend_buffer_t ggml_backend_cpu_buffer_from_ptr(void * ptr, size_t size);

    GGML_API GGML_CALL ggml_backend_buffer_type_t ggml_backend_cpu_buffer_type(void);

#ifdef GGML_USE_CPU_HBM
    GGML_API ggml_backend_buffer_type_t ggml_backend_cpu_hbm_buffer_type(void);
#endif
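
    // A minimal sketch of CPU backend setup; the thread count and buffer size are
    // arbitrary examples, and the wrapped pointer is assumed to stay owned by the caller:
    /*
        ggml_backend_t cpu = ggml_backend_cpu_init();
        if (ggml_backend_is_cpu(cpu)) {
            ggml_backend_cpu_set_n_threads(cpu, 8);
        }

        // wrap application-owned memory in a backend buffer
        void * mem = malloc(64*1024*1024);
        ggml_backend_buffer_t buf = ggml_backend_cpu_buffer_from_ptr(mem, 64*1024*1024);

        ggml_backend_buffer_free(buf);
        free(mem);
        ggml_backend_free(cpu);
    */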

    //
    // Backend registry
    //

    // The backend registry is a registry of all the available backends, and allows initializing backends in a generic way

    GGML_API size_t                     ggml_backend_reg_get_count(void);
    GGML_API size_t                     ggml_backend_reg_find_by_name(const char * name);
    GGML_API ggml_backend_t             ggml_backend_reg_init_backend_from_str(const char * backend_str); // str is name[:params]
    GGML_API const char *               ggml_backend_reg_get_name(size_t i);
    GGML_API ggml_backend_t             ggml_backend_reg_init_backend(size_t i, const char * params); // params is backend-specific
    GGML_API ggml_backend_buffer_type_t ggml_backend_reg_get_default_buffer_type(size_t i);
    GGML_API ggml_backend_buffer_t      ggml_backend_reg_alloc_buffer(size_t i, size_t size);
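
    // A minimal sketch of enumerating the registry and initializing a backend by name;
    // "CPU" is assumed to be a registered backend name and error handling is omitted:
    /*
        for (size_t i = 0; i < ggml_backend_reg_get_count(); i++) {
            printf("backend %zu: %s\n", i, ggml_backend_reg_get_name(i));
        }

        size_t idx = ggml_backend_reg_find_by_name("CPU");
        ggml_backend_t backend = ggml_backend_reg_init_backend(idx, NULL);

        // equivalent shortcut that also parses an optional ":params" suffix:
        // ggml_backend_t backend = ggml_backend_reg_init_backend_from_str("CPU");
    */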

    //
    // Backend scheduler
    //

    // The backend scheduler allows for multiple backends to be used together
    // Handles compute buffer allocation, assignment of tensors to backends, and copying of tensors between backends
    // The backends are selected based on:
    // - the backend that supports the operation
    // - the location of the pre-allocated tensors (e.g. the weights)
    /*
      Example usage:

        sched = ggml_backend_sched_new({backend_gpu, backend_gpu2, backend_cpu}, num_backends);
        // sched is initialized with measure allocators and cannot be used until allocated with a measure graph

        // initialize buffers from a measure graph
        measure_graph = build_graph(sched); // use the allocr to allocate inputs as needed

        // in build_graph:
        build_graph(...) {
            // allocating tensors in a specific backend (optional, recommended: pre-allocate inputs in a different buffer)
            alloc_cpu = ggml_backend_sched_get_allocr(sched, backend_cpu);
            ggml_allocr_alloc(alloc_cpu, tensor);

            // manually assigning nodes to a backend (optional, shouldn't be needed in most cases)
            struct ggml_tensor * node = ggml_mul_mat(ctx, ...);
            ggml_backend_sched_set_node_backend(sched, node, backend_gpu);
        }

        // allocate backend buffers from measure graph
        ggml_backend_sched_init_measure(sched, measure_graph);

        // the scheduler is now ready to compute graphs

        // compute
        graph = build_graph(sched);
        ggml_backend_sched_graph_compute(sched, graph);
    */

    struct ggml_backend_sched;
    typedef struct ggml_backend_sched * ggml_backend_sched_t;

    // when ask == true, the scheduler wants to know if the user wants to observe this node
    // this allows the scheduler to batch nodes together in order to evaluate them in a single call
    //
    // when ask == false, the scheduler is passing the node tensor to the user for observation
    // if the user returns false, the scheduler will cancel the graph compute
    //
    typedef bool (*ggml_backend_sched_eval_callback)(struct ggml_tensor * t, bool ask, void * user_data);
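
    // A minimal sketch of an eval callback following the ask/observe protocol described
    // above; the filtering and the printed fields are arbitrary examples:
    /*
        static bool observe_cb(struct ggml_tensor * t, bool ask, void * user_data) {
            if (ask) {
                // only ask to observe matrix multiplications
                return t->op == GGML_OP_MUL_MAT;
            }
            // observation phase: the node tensor is passed back for inspection
            fprintf(stderr, "node %s (op = %s)\n", t->name, ggml_op_name(t->op));
            return true; // returning false cancels the graph compute
        }

        // registered via ggml_backend_sched_set_eval_callback(), declared below
    */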

    // Initialize a backend scheduler
    GGML_API ggml_backend_sched_t  ggml_backend_sched_new(ggml_backend_t * backends, ggml_backend_buffer_type_t * bufts, int n_backends, size_t graph_size);
    GGML_API void                  ggml_backend_sched_free(ggml_backend_sched_t sched);
    // Initialize backend buffers from a measure graph
    GGML_API void                  ggml_backend_sched_init_measure(ggml_backend_sched_t sched, struct ggml_cgraph * measure_graph);
    // Get the number of splits of the last graph
    GGML_API int                   ggml_backend_sched_get_n_splits(ggml_backend_sched_t sched);

    GGML_API ggml_tallocr_t        ggml_backend_sched_get_tallocr(ggml_backend_sched_t sched, ggml_backend_t backend);
    GGML_API ggml_backend_buffer_t ggml_backend_sched_get_buffer (ggml_backend_sched_t sched, ggml_backend_t backend);

    GGML_API void                  ggml_backend_sched_set_node_backend(ggml_backend_sched_t sched, struct ggml_tensor * node, ggml_backend_t backend);
    GGML_API ggml_backend_t        ggml_backend_sched_get_node_backend(ggml_backend_sched_t sched, struct ggml_tensor * node);

    // Allocate and compute graph on the backend scheduler
    GGML_API void                  ggml_backend_sched_graph_compute(ggml_backend_sched_t sched, struct ggml_cgraph * graph);

    // Reset all assignments and allocators - must be called before using the sched allocators to allocate inputs
    GGML_API void                  ggml_backend_sched_reset(ggml_backend_sched_t sched);

    // Set a callback to be called for each resulting node during graph compute
    GGML_API void                  ggml_backend_sched_set_eval_callback(ggml_backend_sched_t sched, ggml_backend_sched_eval_callback callback, void * user_data);

    //
    // Utils
    //

    struct ggml_backend_graph_copy {
        ggml_backend_buffer_t buffer;
        struct ggml_context * ctx_allocated;
        struct ggml_context * ctx_unallocated;
        struct ggml_cgraph * graph;
    };

    // Copy a graph to a different backend
    GGML_API struct ggml_backend_graph_copy ggml_backend_graph_copy(ggml_backend_t backend, struct ggml_cgraph * graph);
    GGML_API void                           ggml_backend_graph_copy_free(struct ggml_backend_graph_copy copy);

    typedef bool (*GGML_CALL ggml_backend_eval_callback)(int node_index, struct ggml_tensor * t1, struct ggml_tensor * t2, void * user_data);

    // Compare the output of two backends
    GGML_API bool ggml_backend_compare_graph_backend(ggml_backend_t backend1, ggml_backend_t backend2, struct ggml_cgraph * graph, ggml_backend_eval_callback callback, void * user_data);
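
    // A minimal sketch of comparing two backends node by node; backend_cpu, backend_gpu
    // and graph are placeholders, and the actual tolerance check is left as a comment:
    /*
        static bool compare_cb(int node_index, struct ggml_tensor * t1, struct ggml_tensor * t2, void * user_data) {
            // t1 and t2 hold the same node as computed by each backend; compare their data here
            return true; // returning false stops the comparison early
        }

        ggml_backend_compare_graph_backend(backend_cpu, backend_gpu, graph, compare_cb, NULL);
    */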

    // Tensor initialization
    GGML_API void ggml_backend_tensor_alloc(ggml_backend_buffer_t buffer, struct ggml_tensor * tensor, void * addr);
    GGML_API void ggml_backend_view_init(ggml_backend_buffer_t buffer, struct ggml_tensor * tensor);


#ifdef  __cplusplus
}
#endif