llama.cpp
ggml-backend : add GGML_BACKEND_DEVICE_TYPE_IGPU device type #15797 (Merged)
slaren merged 6 commits into master from sl/ggml-backend-dev-ids-ext
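Judging from the title, the PR adds a new GGML_BACKEND_DEVICE_TYPE_IGPU value alongside the existing CPU/GPU/ACCEL device types in ggml-backend, presumably so integrated GPUs can be distinguished from discrete ones when enumerating devices. A minimal sketch of such an enumeration using the public ggml-backend API (the device-listing calls are from ggml-backend.h; the printed labels are only for illustration):

```cpp
#include "ggml-backend.h"
#include <cstdio>

int main() {
    // load all available backends (a no-op for builds without dynamic backends)
    ggml_backend_load_all();

    for (size_t i = 0; i < ggml_backend_dev_count(); ++i) {
        ggml_backend_dev_t dev = ggml_backend_dev_get(i);

        const char * kind = "other";
        switch (ggml_backend_dev_type(dev)) {
            case GGML_BACKEND_DEVICE_TYPE_CPU:  kind = "CPU";  break;
            case GGML_BACKEND_DEVICE_TYPE_GPU:  kind = "GPU";  break;
            case GGML_BACKEND_DEVICE_TYPE_IGPU: kind = "iGPU"; break; // value added by this PR
            default:                            break;
        }

        printf("%-24s %-5s %s\n",
               ggml_backend_dev_name(dev), kind, ggml_backend_dev_description(dev));
    }
    return 0;
}
```

Code that previously branched only on GGML_BACKEND_DEVICE_TYPE_GPU would presumably need to decide whether to treat IGPU devices the same way or skip them.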
ggml-backend : add GGML_BACKEND_DEVICE_TYPE_IGPU device type (3a1c478a)
slaren requested a review from 0cc4m 9 days ago
fix ident (282b2d33)
fix llama-bench (30eb9e83)
extend ggml_backend_dev_props documentation (c6746f2e)
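Commit c6746f2e only extends documentation, but as a reminder of what ggml_backend_dev_props exposes, here is a hedged sketch of querying per-device properties via ggml_backend_dev_get_props (field names follow the pre-existing struct in ggml-backend.h; any identifier field this PR may add, such as a device_id, is omitted because its exact name is not visible here):

```cpp
#include "ggml-backend.h"
#include <cstdio>

// Dump the free/total memory each device reports through ggml_backend_dev_props.
static void print_device_props(void) {
    for (size_t i = 0; i < ggml_backend_dev_count(); ++i) {
        ggml_backend_dev_t dev = ggml_backend_dev_get(i);

        ggml_backend_dev_props props;
        ggml_backend_dev_get_props(dev, &props);

        printf("%s: %zu MiB free / %zu MiB total\n",
               props.name,
               props.memory_free  / (1024u * 1024u),
               props.memory_total / (1024u * 1024u));
    }
}
```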
github-actions added the Nvidia GPU, Vulkan, examples and ggml labels
cuda : format device_id from cudaDeviceProp instead of cudaDeviceGetP… (5dc82f80)
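The commit title above is truncated, but it appears to build the device identifier from fields already present in cudaDeviceProp rather than issuing a separate cudaDeviceGetPCIBusId call. A sketch of that approach, with an assumed format string (the one actually used in the PR is not visible here):

```cpp
#include <cuda_runtime.h>
#include <cstdio>

// Format a PCI-style id from cudaDeviceProp fields instead of calling
// cudaDeviceGetPCIBusId(); the exact format string is an assumption.
static void print_device_id(int device) {
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, device) != cudaSuccess) {
        return;
    }
    char id[32];
    snprintf(id, sizeof(id), "%04x:%02x:%02x.0",
             prop.pciDomainID, prop.pciBusID, prop.pciDeviceID);
    printf("CUDA device %d: %s (%s)\n", device, id, prop.name);
}
```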
0cc4m approved these changes on 2025-09-11
better duplicate gpu message [no ci] (cbe8737a)
slaren merged 360d6533 into master 2 days ago
slaren deleted the sl/ggml-backend-dev-ids-ext branch 2 days ago
Reviewers: 0cc4m
Assignees: No one assigned
Labels: Nvidia GPU, Vulkan, examples, ggml
Milestone: No milestone