llama.cpp
ggml-cuda: use CMAKE_CUDA_ARCHITECTURES if set when GGML_NATIVE=ON
#18413
Merged
am17an merged 1 commit into ggml-org:master from QDelta:master
Commit 16cbfa29: ggml-cuda: use CMAKE_CUDA_ARCHITECTURES if set when GGML_NATIVE=ON
github-actions added the Nvidia GPU and ggml labels
JohannesGaessler approved these changes on 2025-12-27
am17an merged commit 4fd59e84 into master 5 days ago
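The behavior named in the PR title can be sketched as follows. This is a hedged illustration of the CMake logic, not the actual diff: the exact variable checks and placement in ggml's build files are assumptions based on standard CMake semantics for `CMAKE_CUDA_ARCHITECTURES` and the `native` keyword (CMake >= 3.24).

```cmake
# Sketch (assumed logic, not the merged diff): when GGML_NATIVE=ON,
# only fall back to CMake's "native" CUDA architecture detection if the
# user did not already set CMAKE_CUDA_ARCHITECTURES on the command line
# or in a toolchain file.
if (GGML_NATIVE AND NOT DEFINED CMAKE_CUDA_ARCHITECTURES)
    # "native" compiles only for the GPU(s) present on the build machine.
    set(CMAKE_CUDA_ARCHITECTURES "native")
endif()
```

With this logic, an explicit architecture list such as `cmake -B build -DGGML_CUDA=ON -DCMAKE_CUDA_ARCHITECTURES=86` would be honored even when `GGML_NATIVE` is ON, instead of being overridden by native detection.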
Reviewers: JohannesGaessler
Assignees: No one assigned
Labels: Nvidia GPU, ggml
Milestone: No milestone