llama.cpp
4fd59e84
- ggml-cuda: use CMAKE_CUDA_ARCHITECTURES if set when GGML_NATIVE=ON (#18413)
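The commit changes CUDA architecture selection in ggml's CMake build: when `CMAKE_CUDA_ARCHITECTURES` is set explicitly, it is used even with `GGML_NATIVE=ON` (which presumably auto-detected the local GPU's architecture before this fix). A plausible configure invocation exercising the new behavior; `GGML_CUDA` and `GGML_NATIVE` are real llama.cpp build options, while the architecture value `86` is only an example:

```shell
# Configure llama.cpp with CUDA enabled. With this commit, the explicitly
# set CMAKE_CUDA_ARCHITECTURES=86 (Ampere) is honored even though
# GGML_NATIVE=ON, instead of being overridden by native GPU detection.
cmake -B build -DGGML_CUDA=ON -DGGML_NATIVE=ON -DCMAKE_CUDA_ARCHITECTURES=86
cmake --build build --config Release
```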
Committed 4 days ago.
References: #18413 - ggml-cuda: use CMAKE_CUDA_ARCHITECTURES if set when GGML_NATIVE=ON
Author: QDelta
Parent: 08566977