llama.cpp
Commit 641f5dd2
CUDA: fix padding logic for FP16/FP32 (#8884)
Committed 1 year ago
References
#8884 - CUDA: fix padding logic for FP16/FP32
Author
JohannesGaessler
Parents
5f4dcb1e
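
The page records only the commit message, not the diff. For background: ggml's CUDA backend pads row lengths up to a multiple of MATRIX_ROW_PADDING before quantizing activations for the quantized matrix-multiplication kernels, while the FP16/FP32 paths consume unpadded data, so keying the padding decision on the wrong tensor type is the class of bug the message describes. What follows is a minimal hypothetical sketch of that distinction, not the commit's actual change; pad_to_multiple, src1_buffer_elems, and the DType enum are invented names for illustration, and only MATRIX_ROW_PADDING mirrors a constant from the real backend.

#include <cstdint>
#include <cstddef>

// Value used by ggml's CUDA backend for row padding of quantized data.
constexpr int64_t MATRIX_ROW_PADDING = 512;

// Stand-in type tags; Q8_1 represents the quantized case.
enum class DType { F32, F16, Q8_1 };

// Round a row length up to the next multiple of `multiple`.
static int64_t pad_to_multiple(int64_t n, int64_t multiple) {
    return ((n + multiple - 1) / multiple) * multiple;
}

// Quantized kernels read whole padded blocks, so their buffers must
// cover the padded length (with the tail zero-initialized so the extra
// blocks contribute nothing to the dot products). FP16/FP32 kernels
// read exactly ne10 elements, so padding them is the bug to avoid.
static size_t src1_buffer_elems(DType type, int64_t ne10) {
    if (type == DType::F16 || type == DType::F32) {
        return static_cast<size_t>(ne10); // no padding for FP16/FP32
    }
    return static_cast<size_t>(pad_to_multiple(ne10, MATRIX_ROW_PADDING));
}

The design point is simply that the padded size is a property of the quantized kernels' block layout, not of the tensor in general, so the FP16/FP32 branches have to be excluded from it explicitly.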