llama.cpp
7a6e91ad - CUDA: replace GGML_CUDA_F16 with CUDA arch checks (#15433)
