llama.cpp 9fcb29f2 - ggml: allow casting between f32 and i32 (#15783)
Commit
2 days ago
ggml: allow casting between f32 and i32 (#15783)

* ggml: allow casting between f32 and i32
* fix cuda
* add vulkan
* fix CPU non-cont
* add non-cont test case
* add note
* extend test number range
* correct note
* add cont version for vulkan
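The commit message says casting between F32 and I32 is now supported, with CUDA and Vulkan backends and a non-contiguous CPU path covered by new test cases. Below is a minimal sketch of how such a cast might be exercised through the public ggml C API; the tensor size, input values, and graph setup are illustrative assumptions, not taken from this commit's diff or its tests.

```c
// Sketch: cast an F32 tensor to I32 with ggml_cast (illustrative, not from the commit).
#include <stdio.h>
#include "ggml.h"

int main(void) {
    struct ggml_init_params params = {
        /* .mem_size   = */ 16 * 1024 * 1024,
        /* .mem_buffer = */ NULL,
        /* .no_alloc   = */ false,
    };
    struct ggml_context * ctx = ggml_init(params);

    // Source tensor: 4 float values (arbitrary example data).
    struct ggml_tensor * src = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
    float * s = (float *) src->data;
    s[0] = 1.5f; s[1] = -2.0f; s[2] = 3.25f; s[3] = 7.0f;

    // F32 -> I32 cast, the direction this commit enables.
    struct ggml_tensor * dst = ggml_cast(ctx, src, GGML_TYPE_I32);

    // Build and run a forward graph on the CPU backend.
    struct ggml_cgraph * gf = ggml_new_graph(ctx);
    ggml_build_forward_expand(gf, dst);
    ggml_graph_compute_with_ctx(ctx, gf, /*n_threads=*/1);

    // Print the integer results (exact rounding behavior is backend-defined here,
    // not specified by the commit message).
    const int32_t * d = (const int32_t *) dst->data;
    for (int i = 0; i < 4; ++i) {
        printf("%d\n", d[i]);
    }

    ggml_free(ctx);
    return 0;
}
```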
References
#15783 - ggml: allow casting between f32 and i32
Author
ngxson
Parents
5ef22d28