llama.cpp
CUDA: fix FP16 cuBLAS GEMM
#11396
Merged


Commit 8aa0338e by JohannesGaessler: CUDA: fix FP16 cuBLAS GEMM
github-actions added the Nvidia GPU and ggml labels
slaren approved these changes on 2025-01-24
JohannesGaessler merged c5d9effb into master


Reviewers: slaren, IMbackK
Assignees: none
Labels: Nvidia GPU, ggml
Milestone: none
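
The PR title refers to the half-precision (FP16) matrix multiplication path that ggml's CUDA backend runs through cuBLAS. The page does not reproduce the diff itself; as a point of reference only, below is a minimal, hypothetical sketch of an all-FP16 GEMM issued through cublasGemmEx. The matrix sizes, buffer names, and the choice of CUBLAS_COMPUTE_16F are illustrative assumptions and are not taken from commit 8aa0338e.

```cpp
// Minimal sketch of an FP16 GEMM through cuBLAS (not the PR #11396 diff).
// Sizes, names, and the compute-type choice are illustrative assumptions.
#include <cublas_v2.h>
#include <cuda_fp16.h>
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    const int m = 64, n = 64, k = 64;

    // Allocate FP16 operands and output on the device.
    half *dA, *dB, *dC;
    cudaMalloc(&dA, m * k * sizeof(half));
    cudaMalloc(&dB, k * n * sizeof(half));
    cudaMalloc(&dC, m * n * sizeof(half));

    cublasHandle_t handle;
    cublasCreate(&handle);

    // With CUBLAS_COMPUTE_16F the scaling factors must also be half;
    // passing float alpha/beta with an FP16 compute type is a classic
    // source of bugs in this kind of code path.
    const half alpha = __float2half(1.0f);
    const half beta  = __float2half(0.0f);

    // Column-major GEMM: C = alpha * A * B + beta * C, all operands FP16.
    cublasStatus_t st = cublasGemmEx(
        handle, CUBLAS_OP_N, CUBLAS_OP_N,
        m, n, k,
        &alpha,
        dA, CUDA_R_16F, m,
        dB, CUDA_R_16F, k,
        &beta,
        dC, CUDA_R_16F, m,
        CUBLAS_COMPUTE_16F, CUBLAS_GEMM_DEFAULT);

    printf("cublasGemmEx status: %d\n", (int)st);

    cublasDestroy(handle);
    cudaFree(dA);
    cudaFree(dB);
    cudaFree(dC);
    return 0;
}
```

Whichever accumulation precision is used, the types of alpha and beta must match the selected compute type; the actual change made by this PR is in commit 8aa0338e.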