vllm
[Bugfix][Speculative Decoding] Extend Eagle quantization config fix to llama_eagle.py #26590
Merged
robertgshaw2-redhat merged 3 commits into vllm-project:main from neuralmagic:fix-26042
mergify added the llama label
mergify added the speculative-decoding label
Extend: fix from #25883 to llama_eagle.py (0e55d08a)
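The diff itself is not shown on this page, so for orientation only: the title and the reference to #25883 indicate the fix concerns how the Eagle draft model in llama_eagle.py obtains its quantization config. Below is a minimal sketch of that general pattern, assuming the draft layers should resolve quantization from the speculative (draft) model's own config rather than unconditionally inheriting the target model's; every name in it (`resolve_draft_quant_config`, the `speculative_config` / `draft_model_config` attributes) is an illustrative assumption, not the actual vLLM diff.

```python
# Illustrative sketch only -- not the actual change in this PR.
# Assumption: the Eagle draft model resolves quantization from the draft
# model's own config instead of reusing the target model's quant_config.

def resolve_draft_quant_config(vllm_config):
    """Return the quantization setting the Eagle draft layers should use."""
    spec = getattr(vllm_config, "speculative_config", None)
    if spec is None or spec.draft_model_config is None:
        return None  # no speculative decoding configured
    # Fall back to None (unquantized draft layers) when the draft
    # checkpoint declares no quantization of its own.
    return getattr(spec.draft_model_config, "quantization", None)
```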
rahul-tuli force-pushed to 0e55d08a 208 days ago
rahul-tuli marked this pull request as ready for review 208 days ago
rahul-tuli changed the title from "Extend: fix from #25883 to llama_eagle.py" to "[Bugfix][Speculative Decoding] Extend Eagle quantization config fix to llama_eagle.py" 208 days ago
chatgpt-codex-connector commented on 2025-10-13
fix precommit (1ba253e8)
DarkLight1337 approved these changes on 2025-10-13
Merge branch 'main' into fix-26042 (6d7a04d7)
markmc added the ready label
robertgshaw2-redhat enabled auto-merge (squash) 208 days ago
yewentao256 approved these changes on 2025-10-13
robertgshaw2-redhat merged e3b90c1b into main 208 days ago
Reviewers
yewentao256
DarkLight1337
chatgpt-codex-connector
Assignees
No one assigned
Labels
speculative-decoding
ready
llama
Milestone
No milestone