llama.cpp
CUDA: implement __hmax and __hmax2 for CUDA < 11.7
#7019
Merged

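CUDA toolkits older than 11.7 do not provide the half-precision __hmax and __hmax2 device intrinsics, so this PR adds replacement definitions so the kernels still build on older CUDART versions. Below is a minimal sketch of what such a fallback can look like; it is not the PR's actual code, and the compat function names and version guard are assumptions for illustration.

```cuda
#include <cuda_runtime.h>   // for CUDART_VERSION
#include <cuda_fp16.h>      // half, half2 and the conversion intrinsics

// Hypothetical fallback names; the real PR may define the intrinsics differently.
#if defined(CUDART_VERSION) && CUDART_VERSION < 11070
static __device__ __forceinline__ half hmax_compat(const half a, const half b) {
    // Compare in float so we do not depend on half comparison intrinsics
    // that may also be missing on old toolkits.
    return __half2float(a) > __half2float(b) ? a : b;
}

static __device__ __forceinline__ half2 hmax2_compat(const half2 a, const half2 b) {
    // Element-wise maximum of the low and high halves of the packed pair.
    return __halves2half2(hmax_compat(__low2half(a),  __low2half(b)),
                          hmax_compat(__high2half(a), __high2half(b)));
}
#else
// On CUDA >= 11.7 the native intrinsics are available.
#define hmax_compat  __hmax
#define hmax2_compat __hmax2
#endif
```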

JohannesGaessler force-pushed from 0ea9cd2e to bc8ac98d 1 year ago
JohannesGaessler force-pushed from bc8ac98d to 24ea3c6d 1 year ago
Commit 859734ee (JohannesGaessler): CUDA: CUDART < 11.7 workaround for __hmax, __hmax2
JohannesGaessler force-pushed from 24ea3c6d to 859734ee 1 year ago
slaren commented on 2024-05-01
slaren approved these changes on 2024-05-01
JohannesGaessler merged 1613ef8d into master 1 year ago
