llama.cpp
f77261a7
- ggml: bypass code incompatible with CUDA < 11.1 (whisper/2020)
Commit
1 year ago
ggml: bypass code incompatible with CUDA < 11.1 (whisper/2020)

The `cudaHostRegisterReadOnly` flag was only introduced in CUDA 11.1. See this issue for more details: https://github.com/ggerganov/whisper.cpp/issues/2007
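For illustration, the kind of compile-time bypass the message describes might look like the sketch below: the `cudaHostRegisterReadOnly` flag is only passed to `cudaHostRegister` when the CUDA runtime headers report version 11.1 or newer (`CUDART_VERSION` encodes major*1000 + minor*10, so 11.1 is 11010), and older toolkits fall back to registering the buffer without it. This is a minimal sketch under that assumption, not the literal patch; the `register_host_buffer` helper and its arguments are placeholders.

```c
// Minimal sketch of a CUDA-version guard around cudaHostRegisterReadOnly.
// The flag exists only since CUDA 11.1, so older toolkits must register the
// host buffer without it. Helper name and arguments are illustrative.
#include <cuda_runtime.h>

static cudaError_t register_host_buffer(void * buf, size_t size) {
#if CUDART_VERSION >= 11010
    // CUDA >= 11.1: pin the buffer and mark it read-only for the device
    return cudaHostRegister(buf, size, cudaHostRegisterPortable | cudaHostRegisterReadOnly);
#else
    // CUDA < 11.1: the read-only flag is unavailable, register without it
    return cudaHostRegister(buf, size, cudaHostRegisterPortable);
#endif
}
```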
Author: primenko-v
Committer: ggerganov
Parents: 43e8995e