llama.cpp
clip: enable gpu backend #4205
Merged