llama.cpp
4633d93a
ggml : add abort_callback for cpu backend (ggml/725)
Committed 2 years ago

ggml : add abort_callback for cpu backend (ggml/725)

* a way to use abort_callback with the cpu backend
* whisper update
Author:    Xarbirus
Committer: ggerganov
Parent:    4b7b38be