llama.cpp
CUDA: implement __hmax and __hmax2 for CUDA < 11.7
#7019
Merged
JohannesGaessler merged 1 commit into ggml-org:master from JohannesGaessler:cuda-hmax-fix
JohannesGaessler force pushed from 0ea9cd2e to bc8ac98d (1 year ago)
JohannesGaessler force pushed from bc8ac98d to 24ea3c6d (1 year ago)
Commit 859734ee: CUDA: CUDART < 11.7 workaround for __hmax, __hmax2 (sketched below, after the timeline)
JohannesGaessler force pushed from 24ea3c6d to 859734ee (1 year ago)
slaren commented on 2024-05-01
slaren approved these changes on 2024-05-01
JohannesGaessler merged 1613ef8d into master (1 year ago)
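
The change works around the fact that, per the PR title, the `__hmax` and `__hmax2` half-precision intrinsics need a fallback on CUDA toolkits older than 11.7. Below is a minimal sketch of what such a fallback can look like, assuming the standard `cuda_fp16.h` conversion intrinsics; the wrapper names `ggml_hmax`/`ggml_hmax2` are hypothetical and not necessarily the identifiers used in llama.cpp.

```cuda
// Hypothetical compat header sketch, not the exact code merged in this PR.
#include <cuda_runtime.h>
#include <cuda_fp16.h>

#if defined(CUDART_VERSION) && CUDART_VERSION < 11070  // CUDA 11.7 == 11070
// Older toolkits: do the comparison in float, then return/repack as half.
static __device__ __forceinline__ __half ggml_hmax(const __half a, const __half b) {
    return __half2float(a) > __half2float(b) ? a : b;
}
static __device__ __forceinline__ __half2 ggml_hmax2(const __half2 a, const __half2 b) {
    // Compare the low and high lanes separately, then repack into a half2.
    return __floats2half2_rn(fmaxf(__low2float(a),  __low2float(b)),
                             fmaxf(__high2float(a), __high2float(b)));
}
#else
// Newer toolkits: forward to the built-in intrinsics.
static __device__ __forceinline__ __half  ggml_hmax (const __half  a, const __half  b) { return __hmax (a, b); }
static __device__ __forceinline__ __half2 ggml_hmax2(const __half2 a, const __half2 b) { return __hmax2(a, b); }
#endif
```

Routing the pre-11.7 path through float keeps the fallback correct on any architecture at the cost of a few extra conversions, which only matters when building against an old toolkit.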
Reviewers: slaren
Assignees: none
Labels: none
Milestone: none