vllm-project/vllm
Commits
Branch: codex/update-arch-overview-md-with-vllm-v1-details
2142035b  [V1] Support multiple kv connectors (#17564)  (mgoin, 233 days ago, Verified)
78aa341d  [CI] Fix race condition in test_kv_cache_events test (#18169)  (russellb, 233 days ago, Verified)
79747367  Add support for loading torchao models with `AOPerModuleConfig` (#17826)  (jerryzh168, 234 days ago, Verified)
2fc9075b  [V1] Structured Outputs + Thinking compatibility (#16577)  (aarnphm, 234 days ago, Verified)
d93c976a  [Kernel] Have rotary embeddings support tensors (#18046)  (LucasWilkinson, 234 days ago, Verified)
749f7925  [Frontend] decrease import time of vllm.multimodal (#18031)  (davidxia, 234 days ago, Verified)
85686500  [CI] Disable Failing Tests (#18165)  (robertgshaw2-redhat, 234 days ago, Verified)
f9c069c8  Modularize fused experts and integrate PPLX kernels (#15956)  (bnellnm, 234 days ago, Verified)
418d2f8b  [V1][Spec Decode] Share input embedding of target model with EAGLE draft model to free ~1GB for llama 3 model (#17326)  (ekagra-ranjan, 234 days ago, Verified)
964472b9  [Doc] Update prefix cache metrics to counting tokens (#18138)  (heheda12345, 234 days ago, Verified)
59dd311c  [KVConnector] Keep KVTransferParams as a dict (#18033)  (njhill, 234 days ago, Verified)
d066e520  [Bugfix] Fix chat utils tests (#18139)  (DarkLight1337, 234 days ago, Verified)
c8ea982d  Update deprecated type hinting in `platform`, `plugins`, `triton_utils`, `vllm_flash_attn` (#18129)  (hmellor, 234 days ago, Verified)
dc372b9c  Update deprecated type hinting in `vllm/device_allocator` and `vllm/distributed` (#18126)  (hmellor, 234 days ago, Verified)
9b5b39b6  Update deprecated type hinting in `vllm/lora` (#18128)  (hmellor, 234 days ago, Verified)
9ccc6ded  [doc] add missing import (#18133)  (reidliu41, 234 days ago, Verified)
d62a076e  [Model] GritLM supports other attention backends (#18109)  (DarkLight1337, 234 days ago, Verified)
259127f8  [Bugfix] Fix LoRA test (#18123)  (jeejeelee, 234 days ago, Verified)
612c2edb  [FEAT] [ROCm]: Add AITER CK 2 Stages MoE support (#17110)  (tjtanaa, 234 days ago, Verified)
38fe728d  [Bugfix] Fix QKVCrossParallelLinear::sync_weight_attrs for PyTorch compile (#17844)  (anko-intel, 234 days ago, Verified)
82e7f9bb  [Misc] replace does not exist model (#18119)  (lengrongfu, 234 days ago, Verified)
63dc3426  [Model] Add packed_modules_mapping for Qwen3-MOE (#18118)  (jeejeelee, 234 days ago, Verified)
8f5dc414  [Bugfix] Fix entrypoints audio test failure (#18111)  (DarkLight1337, 234 days ago, Verified)
63ad6222  [New Model]: support GTE NewModel (#17986)  (noooop, 234 days ago, Verified)
e7ef61c1  [Bugfix][Example] make lmcache v0 work. (#18051)  (majianpeng, 234 days ago, Verified)
d4154c35  [Bugfix] fix moe marlin `topk_weight` loading (#18080)  (jinzhen-lin, 234 days ago, Verified)
6685890d  [Fix] Move "model_config" as keyword args in chat_utils.py (#18098)  (lk-chen, 234 days ago, Verified)
33011318  Fix broken example: examples/offline_inference/profiling at scheduler_config (#18117)  (Ecthlion, 234 days ago, Verified)
4f8b3732  [BugFix][AMD] Compatible patch for AITER lib after 04/20 (#17912)  (qli88, 234 days ago, Verified)
7b2f28de  [AMD][torch.compile] Enable silu+fp8_quant fusion for rocm (#18082)  (charlifu, 234 days ago, Verified)