support fp8 model and str as input in llm quantization #699
try to support fp8 model as input (b35c9488)
[pre-commit.ci] auto fixes from pre-commit.com hooks (f2d4e9c7)
Merge branch 'main' into update_0731 (a0bc53e0)
[pre-commit.ci] auto fixes from pre-commit.com hooks (7ab9b0a6)
fix (459cb724)
Merge branch 'update_0731' of https://github.com/intel/auto-round int… (4d711b8c)
fix and add ut (186f84e7)
wenhuach21 changed the title from "support fp8 model as input" to "support fp8 model and str as input in llm quantization" 173 days ago
[pre-commit.ci] auto fixes from pre-commit.com hooks (e5b4e3c9)
refine (e3e5b6ee)
[pre-commit.ci] auto fixes from pre-commit.com hooks (a1374c48)
fix preci issue (0c9e7d74)
[pre-commit.ci] auto fixes from pre-commit.com hooks (bfafc6d4)
n1ck-guo approved these changes on 2025-08-05
fix ut (09c3ccfa)
Merge branch 'update_0731' of https://github.com/intel/auto-round int… (d4a5ee98)
wenhuach21 deleted the update_0731 branch 173 days ago