llama.cpp
CUDA: implement __hmax and __hmax2 for CUDA < 11.7
#7019
Merged
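The PR title says the half-precision intrinsics `__hmax` and `__hmax2` are provided for toolkits older than CUDA 11.7, where they are missing from the headers. A minimal sketch of what such a compatibility fallback might look like (the guard macro and float round-trip are assumptions, not the PR's exact code):

```cuda
#include <cuda_fp16.h>

// CUDART_VERSION is 11070 for CUDA 11.7; only define the
// fallbacks when the toolkit does not already provide them.
#if defined(CUDART_VERSION) && CUDART_VERSION < 11070

// Elementwise max of two halves, computed via float to stay
// portable across architectures without native half compare.
static __device__ __forceinline__ half __hmax(const half a, const half b) {
    return __half2float(a) > __half2float(b) ? a : b;
}

// Lane-wise max of two half2 values, built on the scalar fallback.
static __device__ __forceinline__ half2 __hmax2(const half2 a, const half2 b) {
    half2 ret;
    ret.x = __hmax(a.x, b.x);
    ret.y = __hmax(a.y, b.y);
    return ret;
}

#endif // CUDART_VERSION < 11070
```

With a guard like this, kernels can call `__hmax`/`__hmax2` unconditionally: on CUDA 11.7+ they resolve to the built-in intrinsics, and on older toolkits to the emulated versions above.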