uvos
b0493156fa
HIP: disable sync warp shuffle operators from clr amd_warp_sync_functions.h ( #15273 )
2025-08-12 22:15:12 +02:00
Aman Gupta
efe3a90996
CUDA cmake: add -lineinfo for easier debugging ( #15260 )
2025-08-12 17:21:45 +08:00
R0CKSTAR
25ff6f7659
musa: fix failures in test-backend-ops for mul_mat_id op ( #15236 )
...
* musa: fix failures in test-backend-ops for mul_mat_id op
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com >
* Address review comments
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com >
---------
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com >
2025-08-12 10:02:51 +08:00
David Zhao
79c1160b07
cuda: refactored ssm_scan and use CUB ( #13291 )
...
* cuda: refactored ssm_scan to use CUB
* fixed compilation error when not using CUB
* assign L to constant and use size_t instead of int
* deduplicated functions
* change min blocks per mp to 1
* Use cub load and store warp transpose
* suppress clang warning
2025-08-09 20:29:43 +02:00
Aman Gupta
34c9d765bf
CUDA: add attention sinks for tile and wmma ( #15178 )
...
* CUDA: add attention sinks for tile and wmma
* Review: formatting changes + remove syncthreads from tile + remove warp_reduce_max from wmma
2025-08-09 20:00:24 +08:00
AN Long
cd6983d56d
ggml : fix field name when new ggml_backend ( #14944 )
2025-08-08 14:37:22 +02:00
Johannes Gäßler
1425f587a8
CUDA: attention sinks for mma FlashAttention ( #15157 )
2025-08-08 08:19:58 +02:00
Johannes Gäßler
1d72c84188
CUDA: GEMM for FP32/FP16/BF16 and ne11 <= 16 ( #15131 )
...
* CUDA: GEMM for FP32/FP16/BF16 and ne11 <= 16
2025-08-07 10:53:21 +02:00
Georgi Gerganov
fd1234cb46
llama : add gpt-oss ( #15091 )
...
* oai moe
* compat with new checkpoint
* add attn sink impl
* add rope scaling yarn
* logits match with latest transformers code
* wip chat template
* rm trailing space
* use ggml_scale_bias
* rm redundant is_swa_all
* convert interleaved gate_up
* graph : fix activation function to match reference (#7 )
* vocab : handle o200k_harmony special tokens
* ggml : add attention sinks support (#1 )
* llama : add attn sinks
* ggml : add attn sinks
* cuda : add attn sinks
* vulkan : add support for sinks in softmax
remove unnecessary return
* ggml : add fused swiglu_oai op (#11 )
* ggml : add fused swiglu_oai op
* Update ggml/src/ggml-cpu/ops.cpp
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com >
* update CUDA impl
* cont : metal impl
* add vulkan impl
* test-backend-ops : more test cases, clean up
* llama : remove unfused impl
* remove extra lines
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com >
---------
Co-authored-by: slaren <slarengh@gmail.com >
* repack mxfp4 upon conversion
* clean up a bit
* enable thinking
* add quick hack to render only some special tokens
* fix bf16 conversion
* remove vocab hack
* webui ok
* support chat parsing for gpt-oss
* fix webui
* direct mapping mxfp4, FINALLY
* force using mxfp4
* properly use lazy tensor
* ggml : add mxfp4
ggml : use e8m0 conversion instead of powf
Co-authored-by: Diego Devesa <slarengh@gmail.com >
change kvalues_mxfp4 table to match e2m1 (#6 )
metal : remove quantization for now (not used)
cuda : fix disabled CUDA graphs due to ffn moe bias
vulkan : add support for mxfp4
cont : add cm2 dequant
* ggml : add ggml_add_id (#13 )
* ggml : add ggml_add_id
* add cuda impl
* llama : add weight support check for add_id
* perf opt
* add vulkan impl
* rename cuda files
* add metal impl
* allow in-place ggml_add_id
* llama : keep biases on CPU with --cpu-moe
* llama : fix compile error
ggml-ci
* cuda : add fallback for __nv_cvt_e8m0_to_bf16raw
ggml-ci
* cleanup
ggml-ci
* sycl : fix supports_op for MXFP4
ggml-ci
* fix Unknown reasoning format
* ggml-cpu : fix AVX build
ggml-ci
* fix hip build
ggml-ci
* cuda : add mxfp4 dequantization support for cuBLAS
ggml-ci
* ggml-cpu : fix mxfp4 fallback definitions for some architectures
ggml-ci
* cuda : fix version required for __nv_cvt_e8m0_to_bf16raw
---------
Co-authored-by: Xuan Son Nguyen <son@huggingface.co >
Co-authored-by: slaren <slarengh@gmail.com >
2025-08-05 22:10:36 +03:00
Johannes Gäßler
03d4698218
CUDA: use mma FA kernel for gqa > 4 on RTX 4000 ( #15035 )
2025-08-02 16:37:08 +02:00
leejet
3303c19b16
cuda: make im2col a little faster ( #15025 )
2025-08-02 17:15:36 +03:00
Georgi Gerganov
15e92fd337
cuda, sycl : fix batched gemm when ne02 == 1 && ne03 > 1 ( #15038 )
...
* cuda, sycl : fix batched gemm when ne02 == 1 && ne03 > 1
ggml-ci
* cont : fix cont types
ggml-ci
* cont : adopt variable names and comment from the other branch
2025-08-02 17:13:05 +03:00
Johannes Gäßler
9c35706b98
CUDA: fix MMQ nwarps for AMD with warp_size==32 ( #15014 )
2025-08-01 20:47:32 +02:00
uvos
ad4a700117
HIP: enable mfma mmq on gfx908 and gfx90a for select datatypes and shapes ( #14949 )
2025-07-30 17:38:06 +02:00
Johannes Gäßler
92b8810ec7
CUDA: skip masked KV slices for all FA kernels ( #14924 )
2025-07-30 15:46:13 +02:00
uvos
aa79524c51
HIP: remove the use of __HIP_PLATFORM_AMD__, explicitly support only AMD targets ( #14945 )
2025-07-29 20:23:04 +02:00
uvos
b77d11179d
HIP: add GGML_HIP_MMQ_MFMA option to allow disabling the MFMA path. ( #14930 )
...
This is useful for testing for regressions on GCN with CDNA hardware.
With GGML_HIP_MMQ_MFMA=Off and GGML_CUDA_FORCE_MMQ=On we can conveniently test the GCN code path on CDNA. As CDNA is essentially GCN renamed, with MFMA instructions and limited-use ACC registers added, this provides a good alternative for regression testing when GCN hardware is not available.
2025-07-29 17:44:30 +02:00
uvos
c7aa1364fd
HIP: Ignore unsupported unroll transformation in fattn-vec ( #14931 )
...
LLVM with the amdgcn target does not support unrolling loops with conditional break statements when those statements cannot be resolved at compile time. As in other places in GGML, let's simply ignore this warning (a sketch of the pattern follows this entry).
2025-07-29 17:43:43 +02:00
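A minimal sketch of the pattern in question and one way to silence the diagnostic, assuming the warning is emitted under clang's `-Wpass-failed` group; the function and loop below are illustrative placeholders, not the actual fattn-vec code.

```cpp
// Hedged sketch: a loop whose break depends on run-time data cannot be fully
// unrolled by clang for the amdgcn target even when "#pragma unroll" asks for
// it; the compiler warns under -Wpass-failed (assumed group) but still emits
// correct code, so the warning can be suppressed locally.
#ifdef __clang__
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wpass-failed"
#endif
template <int N>
__device__ float sum_until_negative(const float * x) {
    float acc = 0.0f;
#pragma unroll
    for (int i = 0; i < N; ++i) {
        if (x[i] < 0.0f) {
            break; // run-time condition: prevents compile-time unrolling
        }
        acc += x[i];
    }
    return acc;
}
#ifdef __clang__
#pragma clang diagnostic pop
#endif
```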
Sigbjørn Skjæret
138b288b59
cuda : add softcap fusion ( #14907 )
2025-07-29 14:22:03 +02:00
Aman Gupta
0a5036bee9
CUDA: add roll ( #14919 )
...
* CUDA: add roll
* Make everything const, use __restrict__
2025-07-29 14:45:18 +08:00
Johannes Gäßler
946b1f6859
CUDA: fix pointer incrementation in FA ( #14916 )
2025-07-28 14:30:22 +02:00
deepsek
66906cd82a
HIP: Enable Matrix cores for MMQ Kernels, Enable stream-K for CDNA 3 ( #14624 )
...
This commit adds support for MFMA instructions to MMQ. CDNA1/GFX908, CDNA2/GFX90a, and CDNA3/GFX942 are supported by the MFMA-enabled code path added by this commit. The code path and stream-K are only enabled on CDNA3 for now, as they fail to outperform BLAS in all cases on the other devices.
BLAS is currently only consistently outperformed on CDNA3, due to issues in the AMD-provided BLAS libraries.
This commit also improves MMQ's awareness of different warp sizes and, as a side effect, improves the performance on GCN GPUs of all quant formats besides q4_0 and q4_1, which regress slightly.
2025-07-27 00:28:14 +02:00
R0CKSTAR
9b8f3c6c77
musa: fix build warnings (unused variable) ( #14869 )
...
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com >
2025-07-26 10:36:02 +08:00
R0CKSTAR
3f4fc97f1d
musa: upgrade musa sdk to rc4.2.0 ( #14498 )
...
* musa: apply mublas API changes
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com >
* musa: update musa version to 4.2.0
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com >
* musa: restore MUSA graph settings in CMakeLists.txt
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com >
* musa: disable mudnnMemcpyAsync by default
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com >
* musa: switch back to non-mudnn images
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com >
* minor changes
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com >
* musa: restore rc in docker image tag
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com >
---------
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com >
2025-07-24 20:05:37 +01:00
Johannes Gäßler
a86f52b285
CUDA: fix overflow in FA, tune performance ( #14840 )
2025-07-23 21:43:25 +02:00
Johannes Gäßler
b284197df4
CUDA: fix compilation with GGML_CUDA_F16 ( #14837 )
2025-07-23 18:22:30 +02:00
Johannes Gäßler
07a19e27a2
CUDA: fix quantized KV cache + multiple sequences ( #14822 )
...
* CUDA: fix quantized KV cache + multiple sequences
* Update ggml/src/ggml-cuda/fattn-common.cuh
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com >
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com >
2025-07-23 14:08:09 +03:00
Aman Gupta
8c988fa41d
CUDA: add fused rms norm ( #14800 )
2025-07-23 09:25:42 +08:00
Sigbjørn Skjæret
e28c0b80c2
cuda : implement bf16 cpy ops and enable bf16 cont ( #14763 )
...
* implement bf16 cpy ops and enable bf16 cont
* deduplicate copy functions
* deduplicate checks
2025-07-22 12:33:10 +02:00
R0CKSTAR
48b86c4fdb
cuda: remove linking to cublasLt ( #14790 )
...
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com >
2025-07-22 07:45:26 +08:00
Jeff Bolz
c2e058f1b4
vulkan/cuda: Fix im2col when KW!=KH ( #14789 )
...
The tid is decomposed as "ow + ky*OW + kx*OW*KH"; "ksize" is changed to match (see the sketch after this entry).
2025-07-21 13:35:40 +02:00
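An illustrative worked example of that decomposition, assuming `OW` is the output width and `KH`/`KW` are the kernel height and width as in the commit message; the helper function and the sample values are hypothetical, not taken from the kernel.

```cpp
// Hedged sketch of the index decomposition tid = ow + ky*OW + kx*OW*KH.
// When KW != KH the per-kx stride must be ksize = OW*KH, not OW*KW.
#include <cstdint>
#include <cstdio>

static void decompose_tid(int64_t tid, int64_t OW, int64_t KH,
                          int64_t & ow, int64_t & ky, int64_t & kx) {
    const int64_t ksize = OW * KH; // elements covered by one step in kx
    kx = tid / ksize;
    const int64_t rem = tid % ksize;
    ky = rem / OW;
    ow = rem % OW;
}

int main() {
    // example with OW=5, KH=3: (ow=4, ky=2, kx=1) -> tid = 4 + 2*5 + 1*5*3 = 29
    int64_t ow, ky, kx;
    decompose_tid(29, /*OW=*/5, /*KH=*/3, ow, ky, kx);
    printf("ow=%lld ky=%lld kx=%lld\n", (long long) ow, (long long) ky, (long long) kx);
    return 0;
}
```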
Oliver Simons
021cc28bef
cuda : Fix Gemma3n not executed as CUDA_GRAPH on NVGPUs ( #14741 )
...
* Fix Gemma3n not executed as CUDA_GRAPH on NVGPUs
Gemma3n uses matrix-matrix addition as part of its input processing, wrongly triggering CUDA_GRAPH disablement on NVGPUs even when a batch size of 1 is used (a sketch follows this entry).
* Exclude `project_per_layer_input` by matching node names
This ensures that all other graphs which don't exhibit this pattern do
not have their behavior changed.
* Revert unnecessary formatting changes
2025-07-18 04:35:32 -07:00
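A hedged sketch of the kind of name-based exemption described above, written against the public ggml graph accessors; the helper name, the exact batched-add heuristic, and its interaction with CUDA graph capture are assumptions, not the actual ggml-cuda implementation.

```cpp
// Hedged sketch: skip a known-benign node by name before applying a
// "batched matrix-matrix add" heuristic that would otherwise disable
// CUDA graph capture. Assumption-heavy; not the real ggml-cuda code.
#include <cstring>
#include "ggml.h"

static bool should_disable_cuda_graph(struct ggml_cgraph * cgraph) {
    for (int i = 0; i < ggml_graph_n_nodes(cgraph); ++i) {
        const struct ggml_tensor * node = ggml_graph_node(cgraph, i);
        // Gemma3n's per-layer input projection adds a "batched" tensor even at
        // batch size 1, so nodes with this name are exempted from the check
        if (strstr(node->name, "project_per_layer_input") != nullptr) {
            continue;
        }
        // any other matrix-matrix GGML_OP_ADD is taken as a sign of batched
        // decoding, where graph capture is assumed not to be worthwhile
        if (node->op == GGML_OP_ADD && node->src[1] != nullptr && node->src[1]->ne[1] > 1) {
            return true;
        }
    }
    return false;
}
```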
Aman Gupta
f9a31eea06
CUDA: set_rows + cpy.cu refactor ( #14712 )
2025-07-18 14:54:18 +08:00
Georgi Gerganov
225e7a1438
llama : add high-throughput mode ( #14363 )
...
* kv-cache : prepare K/V buffers for separation
ggml-ci
* batched-bench : fix oob write
ggml-ci
* llama : add "virtual sequences"
ggml-ci
* llama : use "stream" vs "virtual sequence"
ggml-ci
* graph : fix stream splitting when KV cache is not used
ggml-ci
* kv-cache : add multi-stream save/load support
ggml-ci
* llama : add "--attn-streams" flag
ggml-ci
* kv-cache : fix handling when find_slot fails
ggml-ci
* kv-cache : restore find_slot impl
ggml-ci
* kv-cache : add comments
* kv-cache : add bounds checks for sequence id
ggml-ci
* cont : add n_seq_max to batch allocr
ggml-ci
* kv-cache : perform stream copies lazily after llama_synchronize
ggml-ci
* kv-cache : avoid throwing exceptions across the C boundary
ggml-ci
* CUDA: 4D FlashAttention support (#14628 )
* CUDA: 4D FlashAttention support
* CUDA: fix WMMA FA kernel
* llama : rename attn_streams -> kv_unified
ggml-ci
* common : rename kv_split -> kv_unified
ggml-ci
---------
Co-authored-by: Johannes Gäßler <johannesg@5d6.de >
2025-07-16 16:35:42 +03:00
R0CKSTAR
cbc68be51d
cuda: fix build warnings in set-rows.cu (unused variable) ( #14687 )
...
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com >
2025-07-15 15:28:53 +08:00
Sigbjørn Skjæret
923e3ea2e3
cuda : add set rows for bf16 ( #14664 )
2025-07-13 15:01:24 +02:00
Yavor Ivanov
e743cddb60
cuda : add ELU support ( #14657 )
2025-07-13 11:33:16 +02:00
Georgi Gerganov
05fec5bd29
ggml : add build-time message to remind about ggml_set_rows ( #14661 )
...
ggml-ci
2025-07-13 10:36:33 +03:00
Aman Gupta
7de5c7cab6
CUDA: add set rows for f32 and f16 ( #14551 )
...
* CUDA: add set rows for f32 and f16
* Review: change kernel params, use strides from host
* Use 1-d kernel
* Review: use int64_t for blockDim.x, rename nb->s for clarity
2025-07-12 16:31:38 +03:00
Tarek Dakhran
f5e96b368f
model : support LiquidAI LFM2 hybrid family ( #14620 )
...
**Important**
LFM2 was [merged](https://github.com/huggingface/transformers/pull/39340) into transformers, but has not yet been released.
To convert into gguf, install transformers from source
```shell
pip install "transformers @ git+https://github.com/huggingface/transformers.git@main "
```
2025-07-11 20:27:01 +02:00
Slobodan Josic
756aa1020a
HIP : Add HIP 7.0+ compatibility for hipBLAS compute types ( #14634 )
2025-07-11 18:55:00 +02:00
compilade
a57d1bcb3c
cuda : support Falcon-H1 state size for SSM_SCAN ( #14602 )
2025-07-09 23:54:38 -04:00
Xuan-Son Nguyen
98bab638fb
ggml : add ggml_scale_bias ( #14417 )
...
* ggml : add ggml_scale_bias
* ggml_vec_mad1_f32
* add more simd
* add CUDA
* sycl
* vulkan
* cann (placeholder)
* opencl
* will this fix cpu?
* fix cuda
* suggestions from coderabbit
* fix cann compile error
* vDSP_vsmsa
* rm __ARM_FEATURE_SVE
* use memcpy for op params
* make code look more consistent
* use scalar for __ARM_FEATURE_SVE
* add x param to ggml_vec_mad1_f32
2025-07-09 18:16:12 +02:00
Georgi Gerganov
4d0dcd4a06
cuda : fix rope with partial rotation and non-cont src ( #14580 )
...
* cuda : fix rope non-cont
ggml-ci
* cont : fix multi-rope + add test
ggml-ci
* sycl : try fix
ggml-ci
* cont : fix sycl + clean-up cuda
ggml-ci
2025-07-08 10:15:21 +03:00
Aman Gupta
75c91de6e9
CUDA: add bilinear interpolation for upscale ( #14563 )
2025-07-08 10:11:18 +08:00
R0CKSTAR
68155c66f0
musa: fix build warnings (unused variable) ( #14561 )
...
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com >
2025-07-08 07:58:30 +08:00
Aman Gupta
b9c3eefde1
CUDA: add bf16 and i32 to getrows ( #14529 )
2025-07-07 21:45:43 +08:00
Sigbjørn Skjæret
28657a8229
ggml : implement GEGLU_ERF and GEGLU_QUICK ops ( #14445 )
2025-07-03 23:07:22 +02:00
Georgi Gerganov
9067487c44
ggml : fix FA mask dim 2 and 3 ( #14505 )
...
* ggml : fix FA mask dim 2 and 3
ggml-ci
* backends : unsupport batched FA in CUDA and Vulkan
ggml-ci
* vulkan : disable FA for mask->ne[2] != 1
2025-07-03 10:46:57 +03:00
Aman Gupta
55c2646b45
CUDA: add dynamic shared mem to softmax, refactor general usage ( #14497 )
2025-07-03 07:45:11 +08:00