llama.cpp
Commit: f7d278fa
Message: ggml : revert CUDA broadcast changes from #2183 (#2191)
Date: 2 years ago
References: #2191 - ggml : revert CUDA broadcast changes from #2183
Author: ggerganov
Parent: 20d7740a