llama.cpp
07a0c4ba
- Revert "ggml-cuda: use CMAKE_CUDA_ARCHITECTURES if set when GGML_NATIVE=ON (#18413)" (#18426)
4 days ago
References
#18426 - Revert "ggml-cuda: use CMAKE_CUDA_ARCHITECTURES if set when GGML_NATI…
Author
am17an
Parents
60f17f56
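For context on what the reverted change touched: it concerned whether an explicitly set CMAKE_CUDA_ARCHITECTURES list is honored when GGML_NATIVE=ON. A minimal sketch of a configure invocation combining the two options named in the commit title (the exact flag spelling and architecture value here are illustrative assumptions, not taken from the commit itself):

```shell
# Hypothetical llama.cpp CUDA configure step: pass an explicit CUDA
# architecture list together with GGML_NATIVE=ON. The reverted patch
# changed how these two settings interact; "86" is an example value.
cmake -B build \
  -DGGML_CUDA=ON \
  -DGGML_NATIVE=ON \
  -DCMAKE_CUDA_ARCHITECTURES="86"
```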