llama.cpp
1e3bc523
- ggml : support CUDA's half type for aarch64(#1455) (#2670)
Commit
2 years ago
ggml : support CUDA's half type for aarch64(#1455) (#2670)

* ggml: support CUDA's half type for aarch64(#1455)

  support CUDA's half type for aarch64 in ggml_fp16_t definition

* ggml: use __CUDACC__ to recognise nvcc compiler
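The practical effect of the change: when nvcc (detected via __CUDACC__) compiles the aarch64 code path, the NEON __fp16 type is not usable, so the ggml_fp16_t scalar falls back to CUDA's half type instead. Below is a minimal sketch of such a guarded typedef, assuming this particular arrangement of preprocessor conditions; the actual definition in ggml.h may differ in detail.

#include <stdint.h>

#if defined(__ARM_NEON) && defined(__CUDACC__)
    // nvcc cannot use the NEON __fp16 type, so fall back to CUDA's half
    // (assumption: cuda_fp16.h is available whenever __CUDACC__ is defined)
    #include <cuda_fp16.h>
    typedef half ggml_fp16_t;
#elif defined(__ARM_NEON)
    // native ARM half-precision scalar type on aarch64
    typedef __fp16 ggml_fp16_t;
#else
    // generic fallback: store the raw 16-bit pattern
    typedef uint16_t ggml_fp16_t;
#endif

With this layout, host-only aarch64 builds keep the native __fp16 representation, while translation units compiled by nvcc get a 16-bit float type the CUDA compiler understands.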
References
#2670 - ggml: support CUDA's half type for aarch64(#1455)
Author
KyL0N
Parents
14b1d7e6