vllm-project/vllm

gptq marlin quantization support for fused moe with lora #30254

Open
Bhanu068 wants to merge 1 commit into vllm-project:main from Bhanu068:gptq_moe_lora_feat

Commit 6a6d0809: gptq marlin quantization support for fused moe with lora
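For context on what this PR enables: serving a GPTQ-quantized mixture-of-experts model together with LoRA adapters. A minimal sketch of how that might be exercised, assuming vLLM's existing CLI flags (`--quantization gptq_marlin`, `--enable-lora`, `--lora-modules`); the model and adapter paths below are placeholders, not taken from this PR:

```shell
# Hypothetical invocation: serve a GPTQ-quantized MoE model with a LoRA adapter.
# Model repo and adapter path are placeholders; flag names follow vLLM's CLI.
vllm serve some-org/mixtral-moe-gptq \
    --quantization gptq_marlin \
    --enable-lora \
    --lora-modules my-adapter=/path/to/lora-adapter
```

Before this change, combining `gptq_marlin` quantization with LoRA on a fused-MoE model would not be supported; this PR's title indicates it adds that support path.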
Bhanu068 requested a review from mgoin 18 hours ago
Bhanu068 requested a review from robertgshaw2-redhat 18 hours ago
Bhanu068 requested a review from tlrmchlsmth 18 hours ago
Bhanu068 requested a review from yewentao256 18 hours ago
Bhanu068 requested a review from pavanimajety 18 hours ago
gemini-code-assist commented on 2025-12-08
chatgpt-codex-connector commented on 2025-12-08
jeejeelee commented on 2025-12-09
Reviewers
jeejeelee
chatgpt-codex-connector
gemini-code-assist
mgoin
robertgshaw2-redhat
tlrmchlsmth
yewentao256
pavanimajety
Assignees: No one assigned
Labels: None yet
Milestone: No milestone