llama.cpp
Commit 9a5c2a16: cuda : switch to F16 scalars + tune warps for RTX 2060
Committed: 1 year ago
Author:    ggerganov
Committer: ggerganov
Parent:    2c04beeb
Files changed (2):
  ggml-cuda.cu
  tests/test-backend-ops.cpp
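The page does not show the diff itself, but the commit message points at two common CUDA tuning techniques: accumulating in F16 (half) scalars instead of F32, and tuning the number of warps per block for a specific GPU. Below is a minimal sketch of what such a change can look like; the kernel name vec_dot_f16 and the NWARPS value are illustrative assumptions, not the actual contents of ggml-cuda.cu. Half-precision arithmetic intrinsics require compute capability 5.3 or newer, which the RTX 2060 (Turing, sm_75) satisfies.

// Hypothetical sketch, not the actual llama.cpp diff: dot product that
// accumulates in F16 scalars and exposes warps-per-block as a tunable.
#include <cuda_fp16.h>

#define WARP_SIZE 32
// Tunable: warps per block. 4 warps (128 threads) is an example value of
// the kind of per-GPU knob the commit message describes for the RTX 2060.
#define NWARPS 4

__global__ void vec_dot_f16(const half * x, const half * y, float * dst, int n) {
    const int tid = blockIdx.x * blockDim.x + threadIdx.x;

    // Accumulate in an F16 scalar instead of promoting to F32.
    half sum = __float2half(0.0f);
    for (int i = tid; i < n; i += gridDim.x * blockDim.x) {
        sum = __hfma(x[i], y[i], sum); // half-precision fused multiply-add
    }

    // Butterfly reduction of the half partial sums within each warp.
    for (int offset = WARP_SIZE / 2; offset > 0; offset >>= 1) {
        sum = __hadd(sum, __shfl_xor_sync(0xffffffff, sum, offset));
    }

    // One atomic add per warp; convert to F32 only at the very end.
    // dst must be zero-initialized (e.g. cudaMemset) before launch:
    //   vec_dot_f16<<<num_blocks, NWARPS * WARP_SIZE>>>(x, y, dst, n);
    if (threadIdx.x % WARP_SIZE == 0) {
        atomicAdd(dst, __half2float(sum));
    }
}

Accumulating in F16 scalars roughly halves register pressure relative to F32 at the cost of precision; that trade-off is presumably why the commit also touches tests/test-backend-ops.cpp, which checks backend ops against reference results.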