transformers
[Quantization] Fix Static FP8 Quantization
#42775
Merged
MekkCyber merged 7 commits into main from revert-fp8

Commits:
- d61fcb97 MekkCyber: fix
- e7538e06 MekkCyber: fix style
- 9c9bfc30 MekkCyber: Update src/transformers/integrations/finegrained_fp8.py
- e71df508 MekkCyber: fix
- de715607 MekkCyber: style
- 4739e205 MekkCyber: Update src/transformers/integrations/finegrained_fp8.py
- b36adad1 github-actions[bot]: Apply style fixes

Timeline:
- SunMarc approved these changes on 2025-12-10
- patrickvonplaten commented on 2025-12-10
- SunMarc commented on 2025-12-10
- MekkCyber changed the title from "[Quantization] Upcast to FP8 for per tensor quantization" to "[Quantization] Fix Static FP8 Quantization" 86 days ago
- MekkCyber merged 15735a43 into main 86 days ago
- MekkCyber deleted the revert-fp8 branch 86 days ago
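For context, the original PR title mentions upcasting before per-tensor quantization. The sketch below shows what static per-tensor FP8 (e4m3) quantization generally looks like: compute one scale from the tensor's absolute maximum (in float32, so the scale is not derived in low precision), then scale and clamp. This is hypothetical illustration code, not the contents of finegrained_fp8.py; FP8 storage is emulated with float32 plus clamping, and the function names are invented for this example.

```python
import numpy as np

FP8_E4M3_MAX = 448.0  # largest finite value representable in the e4m3 format


def per_tensor_quantize(x: np.ndarray):
    """Static per-tensor quantization sketch (hypothetical, not the PR's code).

    One scale is computed for the whole tensor so that the largest |value|
    maps to the FP8 e4m3 maximum; values are then scaled and clamped.
    """
    x = x.astype(np.float32)  # upcast before computing the scale
    amax = max(float(np.abs(x).max()), 1e-12)  # avoid division by zero
    scale = amax / FP8_E4M3_MAX
    q = np.clip(x / scale, -FP8_E4M3_MAX, FP8_E4M3_MAX)
    return q, scale


def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate original values from quantized tensor and scale."""
    return q * scale
```

With this scheme the scale is fixed ("static") rather than recomputed per batch at inference time; frameworks that target a real FP8 dtype would additionally round `q` to the e4m3 mantissa before storage.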
