commit ab6ab8f809
Author: Radoslav Gerganov
Date:   2025-03-28 08:18:04 +02:00

    rpc : send hash when tensor data is above some fixed threshold (#12496)

    * rpc : send hash when tensor data is above some fixed threshold
      ref #10095
    * rpc : put cache under $HOME/.cache/llama.cpp
    * try to fix win32 build
    * another try to fix win32 build
    * remove llama as dependency
						 
				 
			
				
					
						
							
							
commit ae8de6d50a
Author: Diego Devesa
Date:   2024-11-14 18:04:35 +01:00

    ggml : build backends as libraries (#10256)

    * ggml : build backends as libraries

    Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
    Co-authored-by: R0CKSTAR <xiaodong.ye@mthreads.com>
						 
				 
			
				
					
						
							
							
commit 0e9f760eb1
Author: Diego Devesa
Date:   2024-10-10 20:14:55 +02:00

    rpc : add backend registry / device interfaces (#9812)

    * rpc : add backend registry / device interfaces
    * llama : add llama_supports_rpc API
    * ggml_backend_rpc_start_rpc_server -> ggml_backend_rpc_start_server
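A rough sketch of the registry/device pattern this commit brings to the RPC backend (the type and function names below are hypothetical illustrations, not the actual ggml API): rather than exposing ad-hoc entry points, a backend publishes its devices through a common registry so the rest of the code can enumerate them uniformly.

```cpp
#include <cassert>
#include <string>
#include <vector>

// A device as seen through the registry: a name plus whatever
// properties callers need to pick one (memory shown as an example).
struct backend_device {
    std::string name;        // e.g. "RPC[192.168.1.2:50052]"
    size_t      free_memory; // as reported by the backend
};

// Minimal registry: backends register devices, callers enumerate them.
class backend_registry {
public:
    void register_device(const backend_device & dev) {
        devices_.push_back(dev);
    }
    size_t device_count() const { return devices_.size(); }
    const backend_device & device(size_t i) const { return devices_[i]; }
private:
    std::vector<backend_device> devices_;
};
```

With this shape, an RPC backend can expose one device per remote endpoint, and generic code selects among local and remote devices through the same interface.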
						 
				 
			
				
					
						
							
							
commit c83ad6d01e
Author: Diego Devesa
Date:   2024-10-03 01:49:47 +02:00

    ggml-backend : add device and backend reg interfaces (#9707)

    Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
						 
				 
			
				
					
						
							
							
commit f3f65429c4
Author: Georgi Gerganov
Date:   2024-06-26 18:33:02 +03:00

    llama : reorganize source code + improve CMake (#8006)

    * scripts : update sync [no ci]
    * files : relocate [no ci]
    * ci : disable kompute build [no ci]
    * cmake : fixes [no ci]
    * server : fix mingw build
      ggml-ci
    * cmake : minor [no ci]
    * cmake : link math library [no ci]
    * cmake : build normal ggml library (not object library) [no ci]
    * cmake : fix kompute build
      ggml-ci
    * make,cmake : fix LLAMA_CUDA + replace GGML_CDEF_PRIVATE
      ggml-ci
    * move public backend headers to the public include directory (#8122)
      * move public backend headers to the public include directory
      * nix test
      * spm : fix metal header
      Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
    * scripts : fix sync paths [no ci]
    * scripts : sync ggml-blas.h [no ci]

    Co-authored-by: slaren <slarengh@gmail.com>