llama.cpp
vulkan: only use M-sized matmul on Apple GPUs #5412

Merged

0cc4m merged 2 commits into ggml-org:master from slp:vulkan-apple-fix
0cc4m commented on 2024-02-09
slp: vulkan: refactor guess_matmul_pipeline for vendor (f79cef94)
slp: vulkan: only use M-sized matmul on Apple GPUs (3a5a7e37)
slp force-pushed from 23bf5b21 to 3a5a7e37 1 year ago
0cc4m approved these changes on 2024-02-11
0cc4m merged c88c74f9 into master 1 year ago
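
Judging from the PR title and the commit messages alone, the change makes the Vulkan backend's matmul pipeline guess vendor-aware: on Apple GPUs (reached through MoltenVK) the large-tile shader is skipped and only the M-sized pipeline is used. The sketch below illustrates that idea only; the struct layout, size thresholds, and names (vk_device, matmul_l/m/s, guess_matmul_pipeline) are assumptions for illustration, not the actual ggml-vulkan diff.

```cpp
// Hypothetical sketch of a vendor-aware matmul pipeline selection, based on
// the commit messages. All names and thresholds here are illustrative.
#include <cstdint>

struct vk_pipeline { const char * name; };

struct vk_device {
    uint32_t    vendor_id;   // from VkPhysicalDeviceProperties::vendorID
    vk_pipeline matmul_l;    // large-tile matmul shader
    vk_pipeline matmul_m;    // medium-tile matmul shader
    vk_pipeline matmul_s;    // small-tile matmul shader
};

// Apple's PCI vendor ID as reported by MoltenVK (assumed here).
constexpr uint32_t VENDOR_ID_APPLE = 0x106B;

// Pick a matmul pipeline for a problem of size (m, n).
// On Apple GPUs, never fall through to the L-sized pipeline: per the PR
// title, those devices are restricted to the M-sized shader.
static vk_pipeline * guess_matmul_pipeline(vk_device & dev, int m, int n) {
    if (dev.vendor_id == VENDOR_ID_APPLE) {
        return &dev.matmul_m;
    }
    if (m <= 32 || n <= 32) {
        return &dev.matmul_s;
    }
    if (m <= 64 || n <= 64) {
        return &dev.matmul_m;
    }
    return &dev.matmul_l;
}
```

The first commit's message ("refactor guess_matmul_pipeline for vendor") suggests the vendor check was factored into the pipeline-selection path first, with the Apple-specific restriction layered on top in the second commit.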
