llama.cpp
Commit 7a6e91ad
120 days ago
CUDA: replace GGML_CUDA_F16 with CUDA arch checks (#15433)
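The title refers to llama.cpp's GGML_CUDA_F16 build option, which previously enabled FP16 arithmetic in the CUDA backend at compile time. Below is a minimal standalone sketch of the general pattern the title describes: gating the FP16 path on the compile-time CUDA architecture via __CUDA_ARCH__ rather than a build flag. This is an illustration under that assumption, not the actual diff from #15433; the kernel, names, and the 600 threshold are hypothetical.

```cuda
#include <cuda_fp16.h>
#include <cstdio>

// Scale an array, picking FP16 or FP32 arithmetic per compiled architecture
// instead of relying on a build-time define such as GGML_CUDA_F16.
__global__ void scale_f16_or_f32(const float * x, float * y, const float s, const int n) {
    const int i = blockIdx.x*blockDim.x + threadIdx.x;
    if (i >= n) {
        return;
    }
#if defined(__CUDA_ARCH__) && __CUDA_ARCH__ >= 600
    // Architectures with fast half-precision math: multiply in FP16.
    y[i] = __half2float(__hmul(__float2half(x[i]), __float2half(s)));
#else
    // Older architectures: fall back to FP32.
    y[i] = x[i]*s;
#endif
}

int main() {
    const int n = 256;
    float * x;
    float * y;
    cudaMallocManaged(&x, n*sizeof(float));
    cudaMallocManaged(&y, n*sizeof(float));
    for (int i = 0; i < n; ++i) {
        x[i] = (float) i;
    }
    scale_f16_or_f32<<<(n + 255)/256, 256>>>(x, y, 0.5f, n);
    cudaDeviceSynchronize();
    printf("y[10] = %f\n", y[10]); // expected 5.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

Because the check is evaluated per target architecture when nvcc builds each fatbin variant, every compiled architecture automatically gets the appropriate precision path, with no separate build configuration needed.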
References
#15433 - CUDA: replace GGML_CUDA_F16 with CUDA arch checks
Author
JohannesGaessler
Parents
fec95198