llama.cpp
9150f8fe - Do not include arm_neon.h when compiling CUDA code (ggml/1028)

Committed 1 year ago.