llama.cpp
9150f8fe
- Do not include arm_neon.h when compiling CUDA code (ggml/1028)
Committed 1 year ago
Author: frankier
Committer: ggerganov
Parents: c31ed2ab