llama.cpp
CUDA: fix padding logic for FP16/FP32 #8884 (Merged)
JohannesGaessler merged 1 commit into ggml-org:master from JohannesGaessler:cuda-fix-f32-padding

Commits (1):
- 57baa452 CUDA: fix padding logic for FP16/FP32
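The PR carries no written description here. As a rough sketch of what row padding in ggml's CUDA backend involves, the snippet below rounds a logical row length up to an alignment boundary; the function name, constant, and values are illustrative assumptions and are not taken from this PR's diff. Judging by the title, the fix presumably concerns the condition under which such padding is applied to FP16/FP32 (non-quantized) data.

```cpp
// Hedged sketch: round a row length up to an alignment boundary, similar in
// spirit to the row padding used for quantized data in ggml's CUDA backend.
// pad_to_multiple and ROW_PADDING are illustrative names/values, not from the PR.
#include <cstdio>
#include <cstddef>

constexpr size_t ROW_PADDING = 512; // assumed alignment for padded rows

// Smallest multiple of n that is >= x (works for any n > 0).
static size_t pad_to_multiple(size_t x, size_t n) {
    return ((x + n - 1) / n) * n;
}

int main() {
    const size_t ne10 = 4097; // example logical row length
    // Quantized rows are padded so kernels may safely read past the logical
    // end of a row; contiguous FP16/FP32 rows should skip this padding.
    std::printf("padded row length: %zu\n", pad_to_multiple(ne10, ROW_PADDING));
    return 0;
}
```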
slaren approved these changes on 2024-08-06
forworldm approved these changes on 2024-08-06
JohannesGaessler merged 641f5dd2 into master 1 year ago
Reviewers: slaren, forworldm