llama.cpp
141a908a - CUDA: mix virt/real CUDA archs for GGML_NATIVE=OFF (#13135)

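For context, "mixing virtual and real CUDA archs" means compiling PTX (a `-virtual` entry, JIT-compiled by the driver on newer GPUs) for some targets and native cubins (a `-real` entry) for others. A minimal CMake sketch of the idea, not the commit's exact change: `GGML_NATIVE` is the option named in the commit title, `CMAKE_CUDA_ARCHITECTURES` with `-real`/`-virtual` suffixes is standard CMake (3.18+), and the specific architecture numbers are illustrative assumptions.

```cmake
cmake_minimum_required(VERSION 3.18)

# Sketch: with GGML_NATIVE=OFF there is no host GPU to detect, so a fixed
# list of targets is built. "-virtual" entries embed PTX that the driver can
# JIT for future GPUs; "-real" entries ship prebuilt cubins for fast startup.
# The architecture numbers below are illustrative, not the commit's list.
option(GGML_NATIVE "build only for the architectures of the host GPU" OFF)

if (NOT GGML_NATIVE)
    set(CMAKE_CUDA_ARCHITECTURES "61-virtual;70-virtual;75-real;80-real;86-real")
endif()

project(arch_mix_demo LANGUAGES CXX CUDA)

# Targets created after this point inherit CMAKE_CUDA_ARCHITECTURES as their
# CUDA_ARCHITECTURES property (demo.cu is a hypothetical source file).
add_executable(demo demo.cu)
```

The trade-off this illustrates: real architectures avoid driver JIT at load time on matching GPUs, while virtual architectures keep the binary forward-compatible with GPUs not in the list.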