👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run the fastcheck CI, which runs a small but essential subset of CI tests to quickly catch errors. You can run the other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.
Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
To run CI, PR reviewers can add the ready label to the PR. 🚀
The new code looks great, and the performance should be better. A few nit comments:
What's the meaning of this num_new_tokens update here? For example, start_pos can be < num_computed_tokens, in which case the result could be negative?
I don't think it's possible to have start_pos < num_computed_tokens here. This is because num_computed_tokens counts tokens that have already been processed, which means that if there were an image with start_pos < num_computed_tokens, it would already have been processed in a previous iteration (either stored in the KV cache or cached in the encoder cache).
If I understand correctly, the point of this update is that if we cannot run the encoder here, then we want to stop exactly before the first encoder position and run decoder-only processing for the current iteration. However, I think it is possible to have start_pos == num_computed_tokens for a running request? (e.g., the first image token in a placeholder is exactly the first scheduled token, but the cache cannot allocate.)
It's possible when prefix caching is enabled (whilst we currently don't support prefix caching for VLMs).
> we want to stop exactly before the first encoder position and run decoder-only processing for the current iteration.
Exactly.
This seems like code duplication with the running case. Maybe the duplication can be avoided somehow.
+1
The code was simplified a bit. I found it difficult to refactor further, since it's only 5 lines of code and it involves updating local variables like scheduled_encoder_inputs and encoder_budget. The code looks OK to me. WDYT?
nit: A quick doc for this start/end indices computation would be helpful here.
Added some comments above to help understand the logic.
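For readers following along, here is a minimal, self-contained sketch of the interval logic being discussed (illustrative only, not the PR's actual code; PlaceholderRange, the function name, and the return shape are assumptions):

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class PlaceholderRange:
    """Illustrative stand-in: an encoder input (e.g., an image) occupies
    prompt positions [offset, offset + length)."""
    offset: int
    length: int


def schedule_encoder_inputs_toy(
    placeholders: List[PlaceholderRange],
    num_computed_tokens: int,
    num_new_tokens: int,
    encoder_budget: int,
) -> Tuple[List[int], int, int]:
    """Returns (encoder inputs to run, possibly reduced num_new_tokens,
    remaining encoder budget)."""
    scheduled: List[int] = []
    for i, ph in enumerate(placeholders):
        start_pos = ph.offset
        if start_pos >= num_computed_tokens + num_new_tokens:
            # Entirely after the token range scheduled in this step.
            break
        if start_pos < num_computed_tokens:
            # Started in an earlier step, so (per the discussion above) its
            # output is assumed to already be in the KV/encoder cache.
            continue
        if ph.length > encoder_budget:
            # Cannot run this encoder input now: clip num_new_tokens so the
            # decoder stops right before the encoder input's first position.
            # Note num_new_tokens becomes 0 when
            # start_pos == num_computed_tokens.
            num_new_tokens = start_pos - num_computed_tokens
            break
        encoder_budget -= ph.length
        scheduled.append(i)
    return scheduled, num_new_tokens, encoder_budget
```

For example, with num_computed_tokens=0, num_new_tokens=1024, and a single image of length 576 at offset 5, an encoder budget of 576 schedules the image, while a budget of 0 clips num_new_tokens to 5.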
This pull request has merge conflicts that must be resolved before it can be merged. @WoosukKwon, please rebase it: https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork
Sorry for the very delayed review - left some comments!
FWIW - I ran a mini benchmark of this branch vs. V0 on 1x A100-80G.
Command: python vllm/examples/offline_inference_vision_language.py --num-prompts 1000
V0:
1000/1000 [01:33<00:00, 10.69it/s, est. speed input: 6369.87 toks/s, output: 682.44 toks/s]
V1 with this PR and default budget & cache:
1000/1000 [01:01<00:00, 16.13it/s, est. speed input: 9614.21 toks/s, output: 1029.49 toks/s]
V1 with encoder budget and cache size = 576 (this should be more or less equivalent to V1 with the previous VLM design):
1000/1000 [01:15<00:00, 13.18it/s, est. speed input: 7856.67 toks/s, output: 841.03 toks/s]
See my other comment on when num_new_tokens can be 0 for a running sequence.
I also fixed this (for the above decoder tokens) in the prefix caching PR. Also, to clarify the semantics of num_new_tokens:

- Before _schedule_encoder_inputs, num_new_tokens includes the text tokens as well as the image (placeholder) tokens.
- After _schedule_encoder_inputs, num_new_tokens may be the same as before if the encoder budget allows; otherwise it is reduced to include only the text tokens.

Is this understanding correct?
@comaniac Yes, correct. When the encoder cache or budget is insufficient, num_new_tokens is reduced so that the scheduled tokens stop just before the encoder input (e.g., an image placeholder).
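As a concrete, made-up illustration of this semantic: suppose a request has 2,000 prompt tokens of which positions 100-675 are a 576-token image placeholder, num_computed_tokens is 0, and the chunked-prefill budget gives num_new_tokens = 1024. Before _schedule_encoder_inputs, those 1,024 tokens cover both text and placeholder positions. If the encoder budget and cache can accommodate the image, num_new_tokens stays 1,024; otherwise it is reduced to 100, so this step processes only the text before the image.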
@ywang96 Thanks for the review!
QQ: How did you measure the perf of V1 without this PR?
I have updated my original review comment - PTAL!
This pull request has merge conflicts that must be resolved before it can be merged. @WoosukKwon, please rebase it: https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork
FYI, I did a quick performance benchmark for microsoft/Phi-3.5-vision-instruct with a separate process for mm_mapper (on the old version of this PR) and without one. The results below show that the separate process has a large TTFT overhead, even as the RPS goes up (which is a bit surprising) - most likely this is related to pickle/socket overheads. I also did some manual timing of the roundtrips to the separate process and saw that mm_mapper is 5x slower with a separate process than when run directly.
RPS | V0 - TTFT | V1 - TTFT (separate process mm_mapper) |
---|---|---|
1 | 67.05 | 127.99 |
5 | 73.23 | 143.1 |
10 | 84.28 | 190.66 |
RPS | V0 - TPOT | V1 - TPOT (separate process mm_mapper) |
---|---|---|
1 | 14.27 | 14.44 |
5 | 17.59 | 18.89 |
10 | 25.47 | 27.8 |
When there is no separate process, the performance looks much better:
RPS | V0 - TTFT | V1 - TTFT (direct mm_mapper) |
---|---|---|
1 | 67.05 | 69.91 |
5 | 73.23 | 78.47 |
10 | 84.28 | 89.23 |
RPS | V0 - TPOT | V1 - TPOT (direct mm_mapper) |
---|---|---|
1 | 14.27 | 13.10 |
5 | 17.59 | 14.19 |
10 | 25.47 | 16.17 |
The commands used are:
server: vllm serve microsoft/Phi-3.5-vision-instruct --trust-remote-code --max-model-len 4096 --enforce-eager --disable-async-output-proc
client: python benchmarks/benchmark_serving.py --backend openai-chat --base-url http://0.0.0.0:8000/v1 --endpoint /chat/completions --model microsoft/Phi-3.5-vision-instruct --dataset-path lmms-lab/LLaVA-OneVision-Data --dataset-name hf --hf-subset "chart2text(cauldron)" --hf-split train --num_prompts=100 --request-rate 5
Thanks for the benchmarking. Could you also benchmark throughput? I suppose the benefit of separate processes should be more obvious in throughput than in latency, as long as we pipeline mm_mapper well.
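On the pipelining point, a rough sketch of what overlapping the mapper with decoding could look like (the class and its API are hypothetical, not part of this PR, and a process pool still pays the pickling cost discussed above):

```python
from concurrent.futures import Future, ProcessPoolExecutor
from typing import Any, Dict


class AsyncMMInputMapper:
    """Hypothetical sketch: mapping is submitted to a worker process as
    soon as a request arrives and is only awaited when the scheduler
    actually needs the encoder input, so CPU-side mapping can overlap
    with GPU decode steps."""

    def __init__(self, mapper: Any, num_workers: int = 1):
        self.mapper = mapper  # the underlying mm_input_mapper
        self.pool = ProcessPoolExecutor(max_workers=num_workers)
        self.pending: Dict[str, Future] = {}

    def submit(self, request_id: str, mm_data: Any,
               mm_processor_kwargs: Any = None) -> None:
        # Kick off the mapping in the worker process without blocking.
        self.pending[request_id] = self.pool.submit(
            self.mapper.process_inputs, mm_data, mm_processor_kwargs)

    def result(self, request_id: str) -> Any:
        # Blocks only if the mapping for this request hasn't finished yet.
        return self.pending.pop(request_id).result()
```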
In what situations would this limitation hurt performance?
```python
    def _schedule_encoder_inputs(
        self,
        request: Request,
        num_computed_tokens: int,
        num_new_tokens: int,
        encoder_budget: int,
    ) -> Tuple[List[int], int]:
```
Please docstring this function for readability.
Added. Thanks for the suggestion.
```diff
         """
         if intermediate_tensors is not None:
             inputs_embeds = None
-        else:
-            image_input = self._parse_and_validate_image_input(**kwargs)
-            if image_input is not None:
-                vision_embeddings = self._process_image_input(image_input)
-                inputs_embeds = self.language_model.model.get_input_embeddings(
-                    input_ids)
-
-                inputs_embeds = merge_multimodal_embeddings(
-                    input_ids, inputs_embeds, vision_embeddings,
-                    self.config.image_token_index)
-            else:
-                inputs_embeds = self.language_model.model.get_input_embeddings(
-                    input_ids)
-
-            # always pass the input via `inputs_embeds`
-            # to make sure the computation graph is consistent
-            # for `torch.compile` integration
-            input_ids = None
+        elif inputs_embeds is None:
+            vision_embeddings = self.process_mm_inputs(**kwargs)
+            # always pass the input via `inputs_embeds`
+            # to make sure the computation graph is consistent
+            inputs_embeds = self.get_inputs_embeds(input_ids,
```
If we're putting the encoder forward pass and the embedding merge at the model_runner level, then I don't think the code here is needed. (Is it possible for inputs_embeds to be None here when there's multimodal data in the request? If not, we just need to call embed_tokens here to get the text embeddings.)
nvm - I see that it's needed here to be compatible with V0. I will add a note accordingly in my PR to indicate that this needs to be cleaned up after we fully deprecate V0.
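For context, here is a rough sketch of the model_runner-level flow referred to above (illustrative names only, not the actual vLLM code): the runner embeds the text tokens itself and scatters the already-computed encoder outputs into the placeholder slots, so the model only ever receives inputs_embeds.

```python
from typing import Optional

import torch


def prepare_inputs_embeds(
    model: torch.nn.Module,           # assumed to expose get_input_embeddings(input_ids)
    input_ids: torch.Tensor,          # [num_scheduled_tokens]
    mm_embeddings: Optional[torch.Tensor],     # cached encoder outputs, if any
    placeholder_mask: Optional[torch.Tensor],  # bool mask of placeholder positions
) -> torch.Tensor:
    """Sketch: compute text embeddings and merge in the multi-modal
    embeddings before the decoder forward pass."""
    inputs_embeds = model.get_input_embeddings(input_ids)
    if mm_embeddings is not None:
        inputs_embeds = inputs_embeds.clone()
        # Overwrite the placeholder positions with the encoder outputs.
        inputs_embeds[placeholder_mask] = mm_embeddings
    return inputs_embeds
```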
This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @WoosukKwon.
@ywang96 Addressed comments. PTAL.
@ywang96 This PR actually requires adding a get_input_embeddings method to all models (while I only added it to llama, opt, llava, and phi3v in this PR), because it now executes the model's embedding layer separately from the other parts. If we don't want to add this method to the text models, we can use self.model.model.get_input_embeddings instead, though that looks a bit hacky.
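For illustration only (this is not the code added in the PR; the class and sizes are made up), exposing such a method on a text-only model amounts to little more than:

```python
import torch
from torch import nn


class ToyTextModel(nn.Module):
    """Minimal stand-in for a text-only model, for illustration."""

    def __init__(self, vocab_size: int = 32000, hidden_size: int = 4096):
        super().__init__()
        self.embed_tokens = nn.Embedding(vocab_size, hidden_size)

    def get_input_embeddings(self, input_ids: torch.Tensor) -> torch.Tensor:
        # Exposing the embedding lookup lets the model runner compute text
        # embeddings separately and merge multi-modal embeddings into them.
        return self.embed_tokens(input_ids)
```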
@WoosukKwon Overall looks good to me! I left a few more comments mainly around code clarifications so please take a look.
@WoosukKwon Everything looks good to me now - can you merge with main after #10272 is merged for the test fix? After that we can merge this.
```python
        """Add request to the scheduler."""

        req = Request.from_engine_core_request(request)
        # FIXME(woosuk): The input mapping (e.g., PIL images to tensors) may
        # take 10-50 ms, which can cause a spike in the latency. We should
        # consider moving this to a separate thread.
        if req.mm_data:
            req.mm_inputs = self.mm_input_mapper.process_inputs(
                req.mm_data, req.mm_processor_kwargs)
```
One very nice property of V0 + #8348 is that the input mapper can be skipped entirely if the multimodal item is covered by the prefix cache (in our use case with Ultravox we can have many audio chunks in each inference). Not sure if that's practical to preserve in V1?
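This is not how the PR works, but one way to approximate that property would be to memoize the mapper output by content hash so repeated items skip the expensive mapping; a hypothetical sketch (true prefix-cache awareness would go further):

```python
import hashlib
from typing import Any, Dict, Optional


class CachingMMInputMapper:
    """Hypothetical wrapper (not part of this PR): memoize mapper outputs
    by a content hash so repeated multi-modal items (e.g., recurring audio
    chunks) skip the expensive mapping step."""

    def __init__(self, mapper: Any):
        self.mapper = mapper  # the underlying mm_input_mapper
        self._cache: Dict[str, Any] = {}

    def process_inputs(self, mm_data: bytes,
                       mm_processor_kwargs: Optional[dict] = None) -> Any:
        # Assumes mm_data is raw bytes; real multi-modal data (e.g., PIL
        # images) would need a canonical serialization first.
        key = hashlib.sha256(mm_data).hexdigest()
        if key not in self._cache:
            self._cache[key] = self.mapper.process_inputs(
                mm_data, mm_processor_kwargs)
        return self._cache[key]
```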
This PR implements the basic vision language model support in V1.
Motivation
Multi-modal inputs are difficult to deal with because they often have complex (or non-trivial) dependencies. For example, the model can take a prompt with interleaved text and images like
Here, different colors represent different types of dependencies:
In V0, we didn't consider these dependencies; V0 circumvented them by always processing the entire prompt (all images and text) at once. However, this is not desirable, since it doesn't fit well with other optimizations such as chunked prefill and prefix caching.
Proposal
To address this limitation, this PR proposes to make the V1 scheduler consider and track these dependencies explicitly, and perform flexible, fine-grained scheduling based on them. One example looks like the following:

Implementation
Limitations
Misc
To reduce conflicts, I reverted the changes to the detokenizer. Also, the MM input mapper will run in the same process as the engine (scheduler) for now; we will move it to a separate process later.