llama.cpp
server : enable multi-modal prompt caching
#19877
Merged
ggerganov merged 1 commit into master from gg/server-mtmd-prompt-cache
ggerganov requested a review from ngxson (8 days ago)
github-actions added the examples and server labels
ngxson approved these changes on 2026-02-25
Base automatically changed from pr/19747-alt to master (8 days ago)
server : enable multi-modal prompt caching (dc4d4471)
ggerganov force-pushed from f94fc713 to dc4d4471 (8 days ago)
ggerganov merged f20469d9 into master (8 days ago)
ggerganov deleted the gg/server-mtmd-prompt-cache branch (8 days ago)
Reviewers: ngxson
Assignees: No one assigned
Labels: examples, server
Milestone: No milestone