llama.cpp
CUDA: implement __hmax and __hmax2 for CUDA < 11.7 (#7019, merged)
Commits (1)
CUDA: CUDART < 11.7 workaround for __hmax, __hmax2 (JohannesGaessler, committed 1 year ago)
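For context, the sketch below illustrates the kind of fallback such a workaround can provide: guarded definitions of `__hmax` and `__hmax2` that are compiled only when the CUDA runtime predates 11.7. This is not the exact llama.cpp code from the commit; the version-guard value (11070 for CUDA 11.7) and the strategy of comparing in fp32 via `__half2float` are assumptions based on the PR title, and NaN handling may differ from the documented intrinsics.

```cuda
// Minimal sketch (assumption, not the exact llama.cpp implementation):
// provide __hmax / __hmax2 on toolkits older than CUDA 11.7, where the
// intrinsics are not available.
#include <cuda_runtime.h>  // defines CUDART_VERSION
#include <cuda_fp16.h>     // half, half2 and conversion intrinsics

#if defined(__CUDACC__) && CUDART_VERSION < 11070  // 11070 == CUDA 11.7 (assumed guard)

// Scalar fallback: compare in fp32 so only conversion intrinsics that
// exist on older toolkits are required.
static __device__ __forceinline__ half __hmax(const half a, const half b) {
    return __half2float(a) > __half2float(b) ? a : b;
}

// Vector fallback: apply the scalar fallback to the low and high lanes
// of each half2 operand and repack the two results.
static __device__ __forceinline__ half2 __hmax2(const half2 a, const half2 b) {
    return __halves2half2(__hmax(__low2half(a),  __low2half(b)),
                          __hmax(__high2half(a), __high2half(b)));
}

#endif // CUDART_VERSION < 11070
```

The point of routing the comparison through `__half2float` is that the fallback then depends only on conversion intrinsics that are present on older CUDA toolkits, so kernels can call `__hmax`/`__hmax2` unconditionally regardless of the runtime version they are built against.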