llama.cpp
ggml : fix FA mask dim 2 and 3
#14505
Merged

ggerganov merged 3 commits into master from gg/fa-fix-dim-2
ggerganov ggml : fix FA mask dim 2 and 3 (6036177c)
github-actions added labels: testing, ggml, Apple Metal, Nvidia GPU
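
This commit's title refers to dims 2 and 3 of the mask tensor that ggml's flash-attention op consumes. The sketch below only illustrates ggml's [ne0, ne1, ne2, ne3] shape convention for such a mask; the concrete sizes and the per-sequence reading of dim 3 are assumptions for illustration, not taken from this PR's diff.

```c
// Illustration only: ggml's 4D shape convention for a flash-attention mask.
// The sizes and the "per-sequence" reading of dim 3 are assumptions.
#include "ggml.h"
#include <stdio.h>

int main(void) {
    struct ggml_init_params params = {
        /*.mem_size   =*/ 16*1024*1024,
        /*.mem_buffer =*/ NULL,
        /*.no_alloc   =*/ false,
    };
    struct ggml_context * ctx = ggml_init(params);

    // Flat mask: dims 2 and 3 are 1, so it is broadcast to every head/sequence.
    struct ggml_tensor * mask_flat    = ggml_new_tensor_4d(ctx, GGML_TYPE_F16, 256, 64, 1, 1);
    // Batched mask: a non-trivial dim 3 (here, one slice per sequence) is the
    // kind of mask whose handling this PR fixes.
    struct ggml_tensor * mask_batched = ggml_new_tensor_4d(ctx, GGML_TYPE_F16, 256, 64, 1, 4);

    printf("flat    mask ne = [%lld, %lld, %lld, %lld]\n",
           (long long) mask_flat->ne[0],    (long long) mask_flat->ne[1],
           (long long) mask_flat->ne[2],    (long long) mask_flat->ne[3]);
    printf("batched mask ne = [%lld, %lld, %lld, %lld]\n",
           (long long) mask_batched->ne[0], (long long) mask_batched->ne[1],
           (long long) mask_batched->ne[2], (long long) mask_batched->ne[3]);

    ggml_free(ctx);
    return 0;
}
```
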
ggerganov backends : unsupport batched FA in CUDA and Vulkan (89ee2f1c)
ggerganov force-pushed to 89ee2f1c 174 days ago
github-actions added label: Vulkan
ggerganov marked this pull request as ready for review 174 days ago
ggerganov vulkan : disable FA for mask->ne[2] != 1 (b1b22ae1)
ggerganov force-pushed to b1b22ae1 174 days ago
ggerganov merged 9067487c into master 174 days ago
ggerganov deleted the gg/fa-fix-dim-2 branch 174 days ago
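
The follow-up commits narrow backend coverage instead of extending it: CUDA and Vulkan report flash attention with a batched mask (mask->ne[2] != 1) as unsupported so those nodes can run elsewhere. Below is a minimal sketch of what such a capability check can look like; the function name is hypothetical, and the only thing assumed from ggml is that src[3] of a GGML_OP_FLASH_ATTN_EXT node holds the mask. It is not the PR's actual diff.

```c
// Hypothetical sketch of a backend capability check, not the PR's actual code.
// Assumes ggml's convention that src[3] of a flash-attention node is the mask.
#include "ggml.h"
#include <stdbool.h>

static bool backend_supports_op(const struct ggml_tensor * op) {
    switch (op->op) {
        case GGML_OP_FLASH_ATTN_EXT: {
            const struct ggml_tensor * mask = op->src[3];
            // A backend whose FA kernels only handle a single 2D mask rejects
            // batched masks (non-trivial dim 2), so the scheduler can route the
            // node to a backend that does support them.
            if (mask && mask->ne[2] != 1) {
                return false;
            }
            return true;
        }
        default:
            return true;
    }
}
```

In ggml, backends expose this kind of decision through their supports_op hook, which the scheduler consults when assigning graph nodes; rejecting an op there triggers a fallback rather than a runtime error.
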
Discussion participants: JohannesGaessler, ggerganov, jeffbolznv
ggerganov commented on 2025-07-10
