llama.cpp
24a6734d - ggml-cpu : add check for ARM MATMUL_INT8/i8mm support (#15922)

ggml-cpu : add check for ARM MATMUL_INT8/i8mm support (#15922)

This commit adds a check for GGML_MACHINE_SUPPORTS_i8mm when enabling MATMUL_INT8 features, ensuring that i8mm intrinsics are only used when the target hardware actually supports them.

The motivation for this is to fix ggml CI build failures where the feature detection correctly identifies that i8mm is not supported, adding the +noi8mm flag, but the MATMUL_INT8 preprocessor definitions are still enabled, causing the compiler to attempt to use vmmlaq_s32 intrinsics without i8mm support.

Refs: https://github.com/ggml-org/ggml/actions/runs/17525174120/job/49909199499