* convert : parse safetensors directly
* gguf-py : order safetensors tensors by name
Applies to both the local and the remote custom safetensors parsing.
This matches the behavior of the official safetensors implementation.
* convert : rename from_safetensors_meta to from_local_tensor
For consistency with from_remote_tensor
* convert : fix no-lazy dtypes from direct safetensors
* vulkan: use all device-local heaps for memory availability reporting
Co-authored-by: Giuseppe Scrivano <gscrivan@redhat.com>
* use all available heaps for iGPU memory reporting
* Allow multiple memory types per buffer request for devices with split heaps
---------
Co-authored-by: Giuseppe Scrivano <gscrivan@redhat.com>
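For reference, a minimal sketch (standard Vulkan API only, not the actual ggml-vulkan code) of summing every device-local heap when reporting available memory instead of only the first one:

```cpp
// Illustrative sketch: report the total size of all device-local heaps.
// Uses only standard Vulkan structures; not the actual ggml-vulkan code.
#include <vulkan/vulkan.h>
#include <cstdint>

static uint64_t total_device_local_memory(VkPhysicalDevice dev) {
    VkPhysicalDeviceMemoryProperties props;
    vkGetPhysicalDeviceMemoryProperties(dev, &props);

    uint64_t total = 0;
    for (uint32_t i = 0; i < props.memoryHeapCount; ++i) {
        if (props.memoryHeaps[i].flags & VK_MEMORY_HEAP_DEVICE_LOCAL_BIT) {
            total += props.memoryHeaps[i].size;  // count every device-local heap
        }
    }
    return total;
}
```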
* arg: add --cache-list argument to list cached models
* new manifest naming format
* improve naming
* Update common/arg.cpp
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* fix: correct time_ms calculation in send_partial_response
The time_ms field was incorrectly calculated: the division was happening
before the subtraction, leading to incorrect values.
Before: (ggml_time_us() - slot.t_start_process_prompt / 1000)
After:  (ggml_time_us() - slot.t_start_process_prompt) / 1000
* docs : document time_ms field in prompt_progress
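The precedence issue above can be reproduced in isolation; a self-contained sketch with stand-in values (the real code uses ggml_time_us() and slot.t_start_process_prompt):

```cpp
// Minimal illustration of the operator-precedence bug with stand-in values.
#include <cstdint>
#include <cstdio>

int main() {
    int64_t t_now_us   = 5'000'000;  // pretend "now" in microseconds
    int64_t t_start_us = 3'000'000;  // pretend prompt-processing start

    int64_t wrong = t_now_us - t_start_us / 1000;    // divides only t_start_us
    int64_t right = (t_now_us - t_start_us) / 1000;  // elapsed time in ms

    printf("wrong = %lld, right = %lld\n", (long long) wrong, (long long) right);
    // wrong = 4997000, right = 2000
    return 0;
}
```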
This change combines the rms_norm+mul and rope+view+set_rows fusions to
allow fusing the whole sequence together. This comes up in Qwen3, Bailing,
and some other models.
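A hypothetical sketch of the kind of pattern matching such a fusion relies on; the enum, struct, and helpers below are made up for illustration and are not the ggml API:

```cpp
// Hypothetical fusion pattern matching over a flat node list.
#include <cstddef>
#include <initializer_list>
#include <vector>

enum op_t { OP_RMS_NORM, OP_MUL, OP_ROPE, OP_VIEW, OP_SET_ROWS, OP_OTHER };

struct node_t { op_t op; };

// Returns true if the nodes starting at i match the given op sequence.
static bool matches_seq(const std::vector<node_t> & nodes, size_t i,
                        std::initializer_list<op_t> seq) {
    if (i + seq.size() > nodes.size()) return false;
    size_t j = i;
    for (op_t op : seq) {
        if (nodes[j++].op != op) return false;
    }
    return true;
}

// Previously rms_norm+mul and rope+view+set_rows were matched separately;
// checking for the whole sequence lets the backend emit one fused dispatch.
static size_t fused_len(const std::vector<node_t> & nodes, size_t i) {
    if (matches_seq(nodes, i, {OP_RMS_NORM, OP_MUL, OP_ROPE, OP_VIEW, OP_SET_ROWS})) return 5;
    if (matches_seq(nodes, i, {OP_RMS_NORM, OP_MUL}))                                return 2;
    if (matches_seq(nodes, i, {OP_ROPE, OP_VIEW, OP_SET_ROWS}))                      return 3;
    return 1;
}
```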
The std::map pipeline_flash_attn_f32_f16 could be searched and inserted into
at the same time, which requires holding the lock. To be safe, hold the lock
for all of ggml_vk_load_shaders.
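A generic sketch of the pattern being fixed, holding one mutex across both the lookup and the insertion (the names are illustrative, not the actual Vulkan backend code):

```cpp
// Lookups and insertions into a shared std::map must happen under the same mutex.
#include <map>
#include <mutex>
#include <string>

struct pipeline { /* compiled shader state */ };

static std::mutex                      s_lock;
static std::map<std::string, pipeline> s_pipelines;

static pipeline & get_or_create_pipeline(const std::string & key) {
    std::lock_guard<std::mutex> guard(s_lock);  // held for find *and* insert
    auto it = s_pipelines.find(key);
    if (it == s_pipelines.end()) {
        it = s_pipelines.emplace(key, pipeline{}).first;
    }
    return it->second;
}
```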
* bench : cache llama_context state at depth
* cont : handle failures to restore the old state
* cont : print information when the state is being reused
* kv-cache : pad the size of the small SWA cache for performance
* context : pad the total context to 256
* cont : future-proof the swa pad
* server : adjust test params to new logic
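The padding in the kv-cache/context commits above is a plain round-up to a multiple; a stand-in helper (not the actual llama.cpp code) showing the arithmetic:

```cpp
// Round a size up to the next multiple, as implied by "pad the total context to 256".
#include <cstdint>

static uint32_t pad_to(uint32_t n, uint32_t mult) {
    return ((n + mult - 1) / mult) * mult;
}

// pad_to(4097, 256) == 4352, pad_to(4096, 256) == 4096
```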
When using GCC 9 or GCC 12 on arm64 Ubuntu 20.04, the command
"gcc -mcpu=native -E -v -" fails to detect the correct CPU flags, which
results in compilation failures for certain extended instructions. The correct
CPU flags can be obtained by using gcc -march instead.
Signed-off-by: lizhenneng <lizhenneng@kylinos.cn>
Co-authored-by: lizhenneng <lizhenneng@kylinos.cn>
* common: move download functions to download.(cpp|h)
* rm unused includes
* minor cleanup
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* metal : rework mat-mat multiplication
* metal : initial Metal4 support
* cont
* metal : detect tensor support
* cont : better ifdefs
* metal : support tensors in mul_mm_id
* metal : add env for disabling tensor API
* tests : restore
* metal : remove unused constants
* metal : fix check for bfloat tensor support
* cont : handle API incompatibilities
* cont : handle even more incompatibilities
* metal : use tensor API only on M5 and later
* support older SoCs where FASTRPC_GET_URI is unsupported
* added graceful fallback when FASTRPC_GET_URI call fails
* use weak symbols instead of loading libcdsprpc.so dynamically
* Add weak pragma for rpcmem_alloc2
* Remove weak declaration for rpcmem_alloc2 in ggml-hexagon.cpp
Removed weak declaration for rpcmem_alloc2.
* Enforce ndev to 1 for archs below v75
Force ndev to 1 for SoC architectures lower than v75.
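A sketch of the weak-symbol fallback referenced above; the rpcmem_alloc/rpcmem_alloc2 signatures are assumptions for illustration and should be checked against the FastRPC headers:

```cpp
// Weak-symbol fallback: link/load succeeds even if the installed libcdsprpc.so
// does not export the newer entry point. Signatures assumed for illustration.
#include <cstddef>
#include <cstdint>

extern "C" {
    // Present in all libcdsprpc.so versions.
    void * rpcmem_alloc(int heapid, uint32_t flags, int size);
    // Newer entry point: declared weak (equivalently `#pragma weak rpcmem_alloc2`).
    void * rpcmem_alloc2(int heapid, uint32_t flags, size_t size) __attribute__((weak));
}

static void * alloc_rpcmem(int heapid, uint32_t flags, size_t size) {
    if (rpcmem_alloc2 != nullptr) {                  // did the symbol resolve at load time?
        return rpcmem_alloc2(heapid, flags, size);
    }
    return rpcmem_alloc(heapid, flags, (int) size);  // graceful fallback to the old API
}
```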
* WIP
* added a cpy kernel specific to transposed tensors which uses smem to avoid uncoalesced access; test cases also added showing improved memory bandwidth
* added BF16 support
* more strict check to make sure src0 is a transpose
* reformulated to handle more complicated transpose cases
* bring back 2D transpose for higher performance
* allow build on windows
* transpose copy more shapes
* minor tweak
* final clean up
* restore some test cases
* keep only the kernel for the true transposed case; updated with review suggestions
* make CI happy
* remove headers not needed
* reduced bank conflicts for fp16 and bf16
* add missing const*
* now bank-conflict free
* use padding instead of swizzling
---------
Co-authored-by: bssrdf <bssrdf@gmail.com>
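An illustrative CUDA sketch of the technique the commits above describe (not the actual ggml-cuda kernel): stage a tile in shared memory and pad it by one column so the transposed accesses stay coalesced and free of bank conflicts; the `+ 1` padding is the "padding instead of swizzling" choice from the last commits.

```cuda
// Shared-memory transpose copy with a padded tile; launch with
// dim3 block(TILE, TILE) and a grid covering cols x rows of src.
#define TILE 32

__global__ void cpy_transpose_f32(const float * __restrict__ src,
                                  float * __restrict__ dst,
                                  int rows, int cols) {
    __shared__ float tile[TILE][TILE + 1];   // +1 column of padding -> no bank conflicts

    int x = blockIdx.x * TILE + threadIdx.x; // column in src
    int y = blockIdx.y * TILE + threadIdx.y; // row    in src

    if (x < cols && y < rows) {
        tile[threadIdx.y][threadIdx.x] = src[y * cols + x];   // coalesced load
    }
    __syncthreads();

    x = blockIdx.y * TILE + threadIdx.x;     // column in dst (= row in src)
    y = blockIdx.x * TILE + threadIdx.y;     // row    in dst (= column in src)

    if (x < rows && y < cols) {
        dst[y * rows + x] = tile[threadIdx.x][threadIdx.y];   // coalesced store
    }
}
```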
* feat(llama-gguf): Print out the tensor type in llama-gguf
Branch: Mamba2Perf
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* feat(off-topic): print the number of elements in tensors with llama-gguf
Branch: Mamba2SSD
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* style: valign
Branch: GGUFToolOutputs
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* Update examples/gguf/gguf.cpp
---------
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* Add buffer label and enable Dawn-specific toggles to turn off some checks
* Minor set_rows optimization (#4)
* updated optimization, fixed errors
* non vectorized version now dispatches one thread per element
* Simplify
* Change logic for set_rows pipelines
---------
Co-authored-by: Neha Abbas <nehaabbas@macbookpro.lan>
Co-authored-by: Neha Abbas <nehaabbas@ReeseLevines-MacBook-Pro.local>
Co-authored-by: Reese Levine <reeselevine1@gmail.com>
* Comment on Dawn toggles
* Remove some comments
* Implement overlap binary operators
* Revert "Implement overlap binary operators"
This reverts commit ed710b36f5.
* Disable support for non-contiguous binary_op tensors and leave note for future support
---------
Co-authored-by: neha-ha <137219201+neha-ha@users.noreply.github.com>
Co-authored-by: Neha Abbas <nehaabbas@macbookpro.lan>
Co-authored-by: Neha Abbas <nehaabbas@ReeseLevines-MacBook-Pro.local>