llama.cpp
ROCm AMD Unified Memory Architecture (UMA) handling #4449
Merged

ggerganov merged 7 commits into ggml-org:master from ekg:rocm-amd-uma
Commits:
d59c0b3a  ekg  AMD ROCm: handle UMA memory VRAM expansions
e754a83a  ekg  clarify build process for ROCm on linux with cmake
405fc540  ekg  avoid using deprecated ROCm hipMallocHost
6caf33cf  ekg  Merge branch 'master' of https://github.com/ggerganov/llama.cpp into …
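The "clarify build process" commit suggests how a CMake build of llama.cpp with ROCm might be invoked. The following is a hypothetical sketch, not the PR's documented commands: it assumes the `LLAMA_HIPBLAS` option from the project's README of that era, the `LLAMA_HIP_UMA` option named in this PR's commit messages, and the standard ROCm clang install path.

```shell
# Hypothetical ROCm build invocation (flag names assumed as described above).
CC=/opt/rocm/llvm/bin/clang CXX=/opt/rocm/llvm/bin/clang++ \
  cmake -S . -B build -DLLAMA_HIPBLAS=ON -DLLAMA_HIP_UMA=ON
cmake --build build --config Release
```

On an APU, enabling the UMA option lets allocations fall back to system RAM rather than being limited to the BIOS-configured VRAM carve-out.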
ggerganov added the need feedback label
7ee8df3d  ekg  keep simplifying the change required for UMA
1e946c54  ekg  cmake: enable UMA-compatible allocation when LLAMA_HIP_UMA=ON
87cfad3c  ekg  Merge branch 'master' of https://github.com/ggerganov/llama.cpp into …
ggerganov merged 0f630fbc into master 2 years ago
