llama.cpp
cb79c2e7
- ggml: don't include arm_neon.h when using CUDA 12 with ARM Neon (ggml/1187)
Commit
244 days ago
ggml: don't include arm_neon.h when using CUDA 12 with ARM Neon (ggml/1187)

fix #1186
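The title describes a preprocessor guard around the arm_neon.h include. As a rough sketch of that kind of guard (not the actual diff), assuming the standard nvcc macros __CUDACC__ and __CUDACC_VER_MAJOR__, it might look like:

    /* Sketch only, not the actual patch: pull in NEON intrinsics only when
       the target supports them and the translation unit is not being compiled
       by nvcc 12 or newer, which clashes with arm_neon.h on some toolchains. */
    #if defined(__ARM_NEON) && !(defined(__CUDACC__) && __CUDACC_VER_MAJOR__ >= 12)
    #include <arm_neon.h>
    #endif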
References
#12881 - sync : ggml
Author
cmdr2
Committer
ggerganov
Parents
fe92821e