llama.cpp
clip: enable gpu backend
#4205
Merged


ggerganov merged 10 commits into ggml-org:master from FSSRepo:master
FSSRepo clip: enable CUDA backend (08e7afac)
FSSRepo add missing kernels (f4f0b06a)
FSSRepo Merge branch 'ggerganov:master' into master (bc0fabfd)
slaren commented on 2023-11-24
FSSRepo add enough padding for alignment (b13911f0)
FSSRepo Merge branch 'master' of https://github.com/FSSRepo/llama.cpp (62444fc8)
FSSRepo closed this 2 years ago
FSSRepo reopened this 2 years ago
FSSRepo changed the title from "clip: enable cuda backend" to "clip: enable gpu backend" 2 years ago
FSSRepo Merge branch 'master' of https://github.com/ggerganov/llama.cpp (ffdb10d2)
FSSRepo remove ggml_repeat of clip.cpp (a3862783)
FSSRepo Merge branch 'master' of https://github.com/ggerganov/llama.cpp (a52154d3)
FSSRepo add metal backend (2cf4f37e)
ggerganov llava : fixes (44c5f7b1)
slaren commented on 2023-12-29
ggerganov merged ce18d727 into master 2 years ago
