llama.cpp
ggml-cuda: use CMAKE_CUDA_ARCHITECTURES if set when GGML_NATIVE=ON
#18413
Merged


am17an merged 1 commit into ggml-org:master from QDelta:master

Commit 16cbfa29 (QDelta): ggml-cuda: use CMAKE_CUDA_ARCHITECTURES if set when GGML_NATIVE=ON
github-actions added labels: Nvidia GPU, ggml
JohannesGaessler approved these changes on 2025-12-27
am17an merged 4fd59e84 into master 5 days ago
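Per the PR title, the intent is that when GGML_NATIVE=ON, an explicitly set CMAKE_CUDA_ARCHITECTURES is respected rather than being overridden by native architecture detection. A plausible sketch of that logic, with the caveat that the actual variable names and structure in ggml's CMake files may differ:

```cmake
# Hypothetical sketch of the behavior described by the PR title;
# not the actual ggml-cuda CMake code.
if (GGML_NATIVE)
    # Only fall back to native detection when the user has not
    # already chosen target architectures on the command line.
    if (NOT DEFINED CMAKE_CUDA_ARCHITECTURES)
        set(CMAKE_CUDA_ARCHITECTURES "native")
    endif()
endif()
```

With this logic, a configure line such as `cmake -B build -DGGML_NATIVE=ON -DCMAKE_CUDA_ARCHITECTURES=86` would target compute capability 8.6 as requested, while omitting `-DCMAKE_CUDA_ARCHITECTURES` would still select the local GPU's architecture via `native`.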
