intel/auto-round
Pull Requests (Open)
fix dynamic int8 w8a8 export issue with tuning
#1525 opened 2026-03-10 04:02 by thuang6
support hadamard transform for mxfp4/nvfp4 with rtn or autoround method
#1515 opened 2026-03-09 03:03 by lkk12014402
Milestone: 0.12.0
Support GLM-Image model quantization
#1512 opened 2026-03-08 13:24 by lvliang-intel
Fix #1284: preserve FP8 format for layers specified in ignore_layers
#1511 opened 2026-03-08 02:11 by LuciferDono
Support block-wise fp8 quant
#1487 opened 2026-03-03 05:44 by mengniwang95
Enhance llmc CI on GPU and XPU
#1483 opened 2026-03-02 08:41 by chensuyue
Milestone: 0.12.0
Enable CUDA CI
#1473 opened 2026-02-27 02:55 by XuehaoSun
Milestone: 0.12.0
Support Qwen3 and Qwen2.5 Omni model quantization
#1404 opened 2026-02-04 14:48 by lvliang-intel
[Experimental][Won't Merge] DDP PoC
Label: won't merge
#1391 opened 2026-02-04 01:47 by yiliu30
Refactor module access to use PyTorch get/set_submodule API
#1365 opened 2026-01-29 05:39 by scopophobic
support hadamard transform for mxfp4 with rtn or autoround method.
#1349 opened 2026-01-27 05:20 by lkk12014402
refactor init of compressor
Labels: engineering, ready
#1339 opened 2026-01-26 03:04 by n1ck-guo
Add asym for XPU backend.
#1316 opened 2026-01-22 03:39 by luoyu-intel
Robust FP8 layer detection for ignore_layers (#1283)
#1289 opened 2026-01-15 14:21 by scopophobic
Fix ignore_layers not working for FP8 models
#1286 opened 2026-01-15 04:15 by Copilot
[WIP][refactor quantizers][step 1] refactor rtn and tuning
#1278 opened 2026-01-14 08:52 by n1ck-guo
fix disable_opt_rtn spelling error
#1250 opened 2026-01-09 02:19 by WeiweiZhang1
add per-task lm_eval args for experimental usage
#1017 opened 2025-11-11 07:22 by WeiweiZhang1
[WIP] [STEP 2] split compressor into a few quantizers
#841 opened 2025-09-23 00:25 by n1ck-guo