vllm
Commit 0cca9b4d
- [Bugfix] Fix precision loss in LoRA-wrapped RowParallelLinear by fusing bias into GEMM (#28972)
Commit · 19 days ago
[Bugfix] Fix precision loss in LoRA-wrapped RowParallelLinear by fusing bias into GEMM (#28972)

Signed-off-by: prashanth058 <prashanth.dannamaneni@uipath.com>
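The commit title names the technique: fuse the bias into the GEMM rather than adding it to the already-rounded GEMM output. A minimal numpy sketch of why that matters in half precision is below; the tensors, values, and dtypes are constructed for illustration and are not code from the patch (the actual fix lives in vLLM's LoRA layer wrappers). The unfused path rounds twice (once after the matmul, once after the bias add), while the fused path accumulates in float32 and rounds once at the end.

```python
import numpy as np

# All inputs are exactly representable in float16.
x = np.array([[1024.0, 0.5]], dtype=np.float16)   # activations
w = np.array([[1.0], [1.0]], dtype=np.float16)    # weight column
b = np.array([0.25], dtype=np.float16)            # bias

# Exact result: 1024.0 + 0.5 + 0.25 = 1024.75

# Unfused: GEMM output is rounded to fp16 (1024.5 -> 1024.0, ties-to-even,
# since the fp16 spacing near 1024 is 1.0), then the bias add rounds again
# (1024.25 -> 1024.0). Two roundings, total error 0.75.
unfused = (x.astype(np.float32) @ w.astype(np.float32)).astype(np.float16) + b

# Fused: bias joins the float32 accumulation (1024.75), and only the final
# store rounds to fp16 (1024.75 -> 1025.0). One rounding, error 0.25.
fused = (x.astype(np.float32) @ w.astype(np.float32)
         + b.astype(np.float32)).astype(np.float16)

print(float(unfused[0, 0]))  # 1024.0
print(float(fused[0, 0]))    # 1025.0
```

In PyTorch terms, the same idea corresponds to passing the bias into a fused call such as `torch.addmm` instead of doing `matmul` followed by a separate elementwise add on the low-precision output.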
References
#28972 - [Bugfix] Fix precision loss in LoRA-wrapped RowParallelLinear by fusing bias into GEMM
Author
prashanth058
Parents
a8c53682