llama.cpp
sync : ggml
#12104
Merged
Commits (11)
- scripts : sync-ggml-am.sh fix (ggerganov, 1 year ago)
- Support pure float16 add/sub/mul/div operations in the CUDA (and CPU) backend (ggml/1121) (ggerganov, 1 year ago)
- Told cmake to install ggml-cpp.h as a public header file. (ggml/1126) (ggerganov, 1 year ago)
- cmake : fix compile assumptions for power9/etc (whisper/2777) (ggerganov, 1 year ago)
- whisper : support GGML_BACKEND_DL (whisper/2843) (ggerganov, 1 year ago)
- cuda/cpu: Increase support for fp16 unary operations (ggml/1125) (ggerganov, 1 year ago)
- sync : ggml (ggerganov, 1 year ago)
- cuda/vulkan: specify fp32-only support for some operations in supports_op (ggml/1129) (ggerganov, 1 year ago)
- sync : ggml (ggerganov, 1 year ago)
- cuda: unary ops as float + de-duplicate (ggml/1130) (ggerganov, 1 year ago)
- sync : ggml (ggerganov, 1 year ago)