[Bugfix] Fix precision loss in LoRA-wrapped RowParallelLinear by fusing bias into GEMM #28972
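A minimal sketch (not vLLM's actual code) of the numerical issue the title describes, assuming a bf16 path: when the bias is added as a separate elementwise op after the GEMM, the matmul result is rounded to bf16 first and the sum is rounded a second time; when the bias is passed into the GEMM (e.g. via `F.linear`, which many backends fuse into the GEMM epilogue), the addition can happen on the fp32 accumulator before a single final rounding. All tensor shapes here are illustrative.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(4, 1024, dtype=torch.bfloat16)
w = torch.randn(2048, 1024, dtype=torch.bfloat16)
b = torch.randn(2048, dtype=torch.bfloat16)

# Unfused: GEMM output is rounded to bf16, then the bias is added as a
# second bf16 op -- two roundings of the final value.
y_unfused = x @ w.t() + b

# Fused: the bias is handed to the GEMM, so (on backends that fuse the
# bias into the epilogue) it is added to the fp32 accumulator before the
# single rounding to bf16.
y_fused = F.linear(x, w, b)

# fp32 reference for comparing the two paths.
y_ref = x.float() @ w.t().float() + b.float()

print("unfused max err:", (y_unfused.float() - y_ref).abs().max().item())
print("fused max err:  ", (y_fused.float() - y_ref).abs().max().item())
```

Whether the bias actually lands in the GEMM epilogue depends on the backend and kernel selection, but keeping the bias inside the GEMM call is what gives the fused path the chance to avoid the extra low-precision rounding.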
Commit 58e30d49 — [Bugfix] Fix precision loss in LoRA-wrapped RowParallelLinear by fusing bias into GEMM
prashanth058 force-pushed from b58afac5 to 58e30d49 22 days ago
jeejeelee approved these changes on 2025-11-20
jeejeelee enabled auto-merge (squash) 21 days ago
jeejeelee merged 0cca9b4d into main 21 days ago