llama.cpp
be55695e
- ggml-backend : fix async copy from CPU (#8897)
Commit
1 year ago
ggml-backend : fix async copy from CPU (#8897)

* ggml-backend : fix async copy from CPU
* cuda : more reliable async copy, fix stream used when the devices are the same
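The commit message touches two pitfalls of asynchronous copies in CUDA: `cudaMemcpyAsync` from pageable host memory silently degrades to a synchronous copy, and when source and destination sit on the same device the copy must be enqueued on the stream that subsequent work will run on. A minimal sketch of the second point (hypothetical helper, not the actual llama.cpp code; `dst_stream` is assumed to be the destination backend's stream):

```cuda
#include <cuda_runtime.h>

// Sketch: enqueue an async copy on the *destination* stream so that
// later kernels launched on that stream observe the copied data
// without any extra cross-stream synchronization. With unified
// virtual addressing, cudaMemcpyDefault infers the copy direction.
static void copy_tensor_async(void * dst, const void * src, size_t size,
                              cudaStream_t dst_stream) {
    cudaMemcpyAsync(dst, src, size, cudaMemcpyDefault, dst_stream);
}
```

For the host-to-device case, a truly asynchronous copy additionally requires the host buffer to be pinned (allocated with `cudaMallocHost` or registered with `cudaHostRegister`); otherwise the runtime falls back to a blocking transfer.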
References
#8897 - ggml-backend : fix async copy from CPU
Author
slaren
Parents
0478174d