llama.cpp
Commit 2f34b865: cuda : fix LLAMA_CUDA_F16 build (#6298)
Committed
1 year ago
References
#6298 - cuda : fix LLAMA_CUDA_F16 build
Author
slaren
Parents
ae1f211c
Files (1)
ggml-cuda/dmmv.cu
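The changed file, ggml-cuda/dmmv.cu, holds the dequantize-mul-mat-vec (dmmv) kernels, whose intermediate precision is controlled by the LLAMA_CUDA_F16 build option (seen by the CUDA code as a GGML_CUDA_F16 define). The sketch below is not the commit's diff; it only illustrates, as an assumption about that mechanism, how such a compile-time flag typically switches a kernel's accumulation path between float and packed half2. The kernel name dmmv_sketch and its signature are hypothetical.

```cuda
// Illustrative sketch only: shows how a GGML_CUDA_F16-style compile-time flag
// can select half vs. float intermediates in a dequantize-mul-mat-vec kernel.
// This is NOT the diff from commit 2f34b865.
#include <cuda_fp16.h>

#ifdef GGML_CUDA_F16
typedef half  dfloat;   // intermediate type when the F16 path is enabled
typedef half2 dfloat2;
#else
typedef float  dfloat;
typedef float2 dfloat2;
#endif

// Hypothetical kernel: dot product of row `blockIdx.x` of x with vector y.
// Launch with one warp (32 threads) per row; ncols assumed even in the F16 path.
__global__ void dmmv_sketch(const dfloat * x, const dfloat * y, float * dst, int ncols) {
    const int row = blockIdx.x;
    const int tid = threadIdx.x;

#ifdef GGML_CUDA_F16
    half2 tmp = __float2half2_rn(0.0f);              // two partial sums packed in one half2
    for (int i = 2*tid; i < ncols; i += 2*blockDim.x) {
        const half2 xi = __halves2half2(x[row*ncols + i], x[row*ncols + i + 1]);
        const half2 yi = __halves2half2(y[i], y[i + 1]);
        tmp = __hfma2(xi, yi, tmp);                   // fused multiply-add in half precision
    }
    float sum = __low2float(tmp) + __high2float(tmp); // unpack to float for the reduction
#else
    float sum = 0.0f;
    for (int i = tid; i < ncols; i += blockDim.x) {
        sum += x[row*ncols + i] * y[i];
    }
#endif

    // warp-level reduction of per-thread partial sums
    for (int offset = warpSize/2; offset > 0; offset >>= 1) {
        sum += __shfl_down_sync(0xffffffff, sum, offset);
    }
    if (tid == 0) {
        dst[row] = sum;                               // output is always float
    }
}
```

In the real dmmv kernels the row data is a quantized block that is dequantized on the fly and the result is always written as float; only the intermediate type changes with the flag, which is presumably why the build could break only in the LLAMA_CUDA_F16 configuration.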