[feat] Enable mm caching for transformers backend #21358
vllm-bot merged 2 commits into vllm-project:main from zucchini-nlp:vlm-transformers
Commit ea65be5e: "dont ask to explicitly disable caching"
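For context on what the PR enables: with this change the multimodal processor cache can be used when a model runs through the Transformers modeling backend, and users no longer have to disable it explicitly. A minimal usage sketch, assuming the `model_impl` and `disable_mm_preprocessor_cache` engine arguments present in vLLM releases around this change (the model name is only an example, not taken from the PR):

```python
# Sketch only: argument names follow recent vLLM engine arguments and may
# differ between versions; the model name is just an example vision-language model.
from vllm import LLM

llm = LLM(
    model="Qwen/Qwen2.5-VL-3B-Instruct",   # example VLM, not specified by the PR
    model_impl="transformers",             # run the model via the Transformers backend
    disable_mm_preprocessor_cache=False,   # leave multimodal processor caching on (default)
)
# Repeated requests that carry identical images/videos can now reuse cached
# processor outputs instead of re-running the HF processor for every request.
```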
zucchini-nlp requested a review from hmellor, WoosukKwon, robertgshaw2-redhat, njhill, ywang96, comaniac, and alexm-redhat 144 days ago
mergify added the documentation and v1 labels
gemini-code-assist commented on 2025-07-22
DarkLight1337 requested a review from Isotr0py 144 days ago
Commit a4290b07: "return hashes"
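The "return hashes" commit points at the general mechanism behind the cache: multimodal items are identified by content hashes so repeated inputs can be looked up rather than reprocessed. A toy sketch of that idea follows; every name in it is hypothetical and it is not vLLM's actual implementation:

```python
# Toy illustration only (hypothetical names, not vLLM internals): keying
# multimodal items by a content hash lets identical inputs reuse the
# already-processed tensors instead of invoking the processor again.
import hashlib
from typing import Any, Callable, Dict


def mm_item_hash(raw: bytes) -> str:
    # Hash the raw bytes of a multimodal item (e.g. an encoded image).
    return hashlib.sha256(raw).hexdigest()


class ToyMMProcessorCache:
    def __init__(self) -> None:
        self._cache: Dict[str, Any] = {}

    def get_or_process(self, raw: bytes, process: Callable[[bytes], Any]) -> Any:
        key = mm_item_hash(raw)
        if key not in self._cache:
            # Expensive preprocessing (resize, tokenize, tensorize) runs only
            # once per distinct item; later hits return the cached result.
            self._cache[key] = process(raw)
        return self._cache[key]
```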
zucchini-nlp requested a review from DarkLight1337 144 days ago
mergify added the multi-modality label
Isotr0py approved these changes on 2025-07-22
Isotr0py enabled auto-merge (squash) 144 days ago
DarkLight1337 added this to the v0.10.0 milestone 144 days ago
DarkLight1337 added the ready label
zucchini-nlp changed the title from "[Bugfix] mm caching isn't tied to prefix caching" to "[feat] Enable mm caching for transformers backend" 144 days ago
vllm-bot merged f38ee34a into main 144 days ago
Reviewers: Isotr0py, gemini-code-assist, hmellor, WoosukKwon, robertgshaw2-redhat, njhill, ywang96, comaniac, alexm-redhat, DarkLight1337
Assignees: No one assigned
Labels: documentation, ready, v1, multi-modality
Milestone: v0.10.0