llama.cpp
f7d278fa - ggml : revert CUDA broadcast changes from #2183 (#2191)

Commit · 2 years ago