llama.cpp
[SYCL] support to malloc memory on Intel GPU more than 4GB
#17566
Merged
ggerganov merged 1 commit into ggml-org:master from arthw:support_4gb

bff9f6f4 — support to malloc memory on device more than 4GB, update the doc and …
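Per the PR title, the change lets the SYCL backend allocate a single device buffer larger than 4 GB on Intel GPUs, which Intel's Level Zero driver caps by default. A minimal sketch of how this is usually enabled in the oneAPI toolchain — the compiler flag and environment variable names below come from Intel's oneAPI documentation as I recall it, not from this PR, so treat them as assumptions:

```shell
# Assumption: standard oneAPI knobs for relaxed (>4GB) allocation limits;
# this PR may wire these up differently inside llama.cpp's build.

# Tell the device compiler that kernels may touch buffers larger than 4 GB.
icpx -fsycl -Xs "-ze-opt-greater-than-4GB-buffer-required" app.cpp -o app

# Ask the Level Zero adapter to relax its per-allocation size limit at runtime.
export UR_L0_ENABLE_RELAXED_ALLOCATION_LIMITS=1
./app
```

Without both pieces, a `sycl::malloc_device` request above 4 GB on an Intel GPU typically fails even when the card has enough free memory.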
NeoZhangJianyu approved these changes on 2025-11-28

github-actions added the documentation, examples, ggml, and SYCL labels

NeoZhangJianyu requested a review from slaren 136 days ago

ggerganov merged 7d2add51 into master 135 days ago