24db0a71 Add `ggml_roll` (ggml/1274)
3f3bcea0 ggml: Add Apple support for GGML_CPU_ALL_VARIANTS (llama/14258)
182e6992 ggml-cpu: fix uncaught underscore terminators (llama/14023)
b3fce03b ggml-cpu: reduce asm calls for hsum (llama/14037)
a642a95b metal : add mean kernel (llama/14267)
83e5e8ce Vulkan: Set device max size for host memory to avoid OOM warning and …
7c8169a1 llamafile : support s390x SIMD instruction set (llama/14273)
30592871 sycl: Cleanup codepaths in Get Rows in sycl backend (llama/14215)
172d7746 build : suppress gcc15 compile warnings (llama/14261)
9f9df26c ggml-cpu : remove unnecessary arm feature detection (llama/14281)
800d1570 CUDA: add conv_2d_dw (llama/14265)
32adef8d ggml: Update KleidiAI to v1.9.0 (llama/14277)
bbfcd43d ggml : fix repack work size for mul_mat_id (llama/14292)
a0c715cb cuda : synchronize graph capture and cublas handle destruction (llama…
c07464ba Implement GGML_CPU_ALL_VARIANTS for PowerPC (llama/14286)
3026343a sycl: add usage of enqueue_functions extension (llama/14244)
c3d50c23 CUDA: add conv_2d_transpose (llama/14287)
faae28d5 sync : ggml
e7d5fae3 talk-llama : sync llama.cpp
danbev approved these changes on 2025-06-21
ggerganov merged e6c10cf3 into master 205 days ago
ggerganov deleted the sync-ggml-25-06-20 branch 205 days ago