intel/auto-round
Open Pull Requests
Robust FP8 layer detection for ignore_layers (#1283)
#1289 opened 2026-01-15 14:21 by scopophobic
fix low_cpu new
#1288 opened 2026-01-15 07:00 by wenhuach21
Fix format bug
#1287 opened 2026-01-15 05:05 by n1ck-guo
Fix ignore_layers not working for FP8 models
#1286 opened 2026-01-15 04:15 by Copilot
Preserve FP8 format for ignored layers in FP8 models
#1285 opened 2026-01-15 04:11 by Copilot
Update version
#1282 opened 2026-01-15 02:39 by XuehaoSun
Fix fp_layers kwarg forwarded as ignore instead of ignore_layers
#1281 opened 2026-01-15 01:37 by Copilot
[WIP][refactor quantizers][step 1] refactor rtn and tuning
#1278 opened 2026-01-14 08:52 by n1ck-guo
refine low_cpu
#1270 opened 2026-01-13 07:59 by wenhuach21
Refine MoE modelings to reduce peak RAM usage
#1265 opened 2026-01-13 06:46 by WeiweiZhang1
Fix XPU CI hang issue
#1261 opened 2026-01-12 11:10 by xin3he
(feat): add support for g2 fp8 on cpu with LUT
#1254 opened 2026-01-10 11:19 by SwekeR-463
Fix low cpu
#1253 opened 2026-01-09 11:12 by wenhuach21
Fix disable_opt_rtn spelling error
#1250 opened 2026-01-09 02:19 by WeiweiZhang1
Extend compatibility test
#1131 opened 2025-12-12 02:50 by chensuyue
Add per-task lm_eval args for experimental usage
#1017 opened 2025-11-11 07:22 by WeiweiZhang1
[WIP] [STEP 2] split compressor into few quantizers
#841 opened 2025-09-23 00:25 by n1ck-guo