whisper.cpp
sync : ggml + llama.cpp (#2455, Merged)
ggerganov merged 15 commits into master from sync
Commits (15):

f97be1bc  scripts : sync ggml-backend.cpp
1a5d63d6  ggml: refactor cross entropy loss CPU impl. (ggml/976)
8fdc942e  ggml/ex: calculate accuracy in graph, adapt MNIST (ggml/980)
d6f1bd9a  vulkan : do not use tensor->extra (llama/9407)
1163865a  Initial cmake support of SYCL for AMD GPUs (llama/9658)
3d86dc43  ggml-backend : add device and backend reg interfaces (llama/9707)
84cc6c81  Fixed dequant precision issues in Q4_1 and Q5_1 (llama/9711)
cd78b885  ggml-backend : add device and backend reg interfaces (llama/9707)
66225ab8  ggml : fixes after sync (ggml/983)
98a54085  ggml : fix typo in example usage ggml_gallocr_new (ggml/984)
f2874373  whisper : adapt to latest ggml (skip) (#0)
d9a6ba3e  whisper : revert mel-related changes (#0)
7a2784ee  metal : zero-init buffer contexts (#0)
0c0247e1  objc : fix build
47de4115  whisper : zero-out the KV cache upon clear (#2445)
ggerganov merged 847f94fd into master 1 year ago
ggerganov deleted the sync branch 1 year ago