vllm-project/vllm
[TPU] Update torch version to include paged attention kernel change #19706
Status: Merged
yaochengji merged 1 commit into vllm-project:main from Chenyaaang:update-requirement
Timeline:
- gemini-code-assist commented on 2025-06-16
- mergify added the ci/build label
- gemini-code-assist commented on 2025-06-16
- yaochengji added the tpu label
- yaochengji added the ready label
- mergify removed the tpu label
- yaochengji approved these changes on 2025-06-16
- yaochengji enabled auto-merge (squash) 327 days ago
- auto-merge disabled 327 days ago (manually disabled by user)
- Chenyaaang marked this pull request as draft 327 days ago
- Commit: "update torch version to include paged attention kernel change" (d00eb38e)
- Chenyaaang force pushed to d00eb38e 326 days ago
- Chenyaaang marked this pull request as ready for review 326 days ago
- yaochengji enabled auto-merge (squash) 326 days ago
- yaochengji merged dac8cc49 into main 326 days ago
Reviewers: yaochengji, gemini-code-assist
Assignees: no one assigned
Labels: ready, ci/build
Milestone: no milestone