llama.cpp
2ba9adc0 - Adjust workaround for ROCWMMA_FATTN/GFX9 to only newer ROCm versions (#19591)

Adjust workaround for ROCWMMA_FATTN/GFX9 to only newer ROCm versions (#19591)

Avoids issues with ROCm 6.4.4.

Closes: https://github.com/ggml-org/llama.cpp/issues/19580
Fixes: 6845f7f87 ("Add a workaround for compilation with ROCWMMA_FATTN and gfx9 (#19461)")
Signed-off-by: Mario Limonciello (AMD) <superm1@kernel.org>