vllm
[Linear Attention] fix bug for linear attention + prefix caching + reset_prefix_cache #35157 (Merged)


Commit 9f41cf9a (heheda12345): fix reset kv for linear + prefix cache
mergify added the v1 and bug labels
gemini-code-assist commented on 2026-02-24
tdoublep approved these changes on 2026-02-24 and added the ready label
Commit 923e8d0f (heheda12345): remove useless comments
vllm-bot merged 8fae54fa into main 10 days ago
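
The PR title and the commit message ("fix reset kv for linear + prefix cache") indicate that calling `reset_prefix_cache` did not correctly reset the cached state used by linear-attention layers when prefix caching was enabled. The sketch below is a minimal conceptual illustration of that failure mode, not the PR's actual code: all names (`PrefixCache`, `LinearAttentionState`, `CacheCoordinator`) are hypothetical stand-ins, assuming that a prefix-cache reset must also invalidate the recurrent state kept for linear attention.

```python
# Conceptual sketch only -- NOT the PR's implementation. Hypothetical names
# throughout. Illustrates why a prefix-cache reset must also clear the
# cached linear-attention (recurrent) state.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class PrefixCache:
    """Hypothetical block-based prefix cache keyed by a token-prefix hash."""
    blocks: Dict[int, List[int]] = field(default_factory=dict)

    def reset(self) -> None:
        self.blocks.clear()


@dataclass
class LinearAttentionState:
    """Hypothetical cached recurrent state for linear-attention layers."""
    state: Dict[int, List[float]] = field(default_factory=dict)

    def reset(self) -> None:
        self.state.clear()


class CacheCoordinator:
    """Ties both caches together so one reset clears them consistently."""

    def __init__(self) -> None:
        self.prefix_cache = PrefixCache()
        self.linear_attn_state = LinearAttentionState()

    def reset_prefix_cache(self) -> None:
        # Buggy analogue: clearing only the prefix cache lets later requests
        # see an "empty" prefix cache while reusing stale recurrent state.
        self.prefix_cache.reset()
        # Fix analogue: invalidate the linear-attention state at the same time.
        self.linear_attn_state.reset()


if __name__ == "__main__":
    coord = CacheCoordinator()
    coord.prefix_cache.blocks[123] = [0, 1, 2]
    coord.linear_attn_state.state[123] = [0.0] * 8
    coord.reset_prefix_cache()
    assert not coord.prefix_cache.blocks
    assert not coord.linear_attn_state.state
```

The key design point illustrated here is that both caches are keyed to the same prefixes, so resetting one without the other leaves them inconsistent.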
