llama.cpp
f54a4ba1 - Support pure float16 add/sub/mul/div operations in the CUDA (and CPU) backend (ggml/1121)

Commit: 298 days ago
Support pure float16 add/sub/mul/div operations in the CUDA (and CPU) backend (ggml/1121)

* Support float16-to-float16 add/sub/mul/div operations in the CUDA backend
* Add fp16 support for add/sub/mul/div on the CPU backend
* Add test cases for fp16 add/sub/mul/div