intel/auto-round
Open Pull Requests
Fix missing extra_config export for unsupported ignore_layers like `mlp.gate`
#1660 opened 2026-04-04 14:40 by lvliang-intel
fix autoscheme accuracy drop bug w/o low_gpu, add CI test
#1658 opened 2026-04-03 10:06 by WeiweiZhang1 · 0.12.1
try to support gemma4
#1656 opened 2026-04-03 07:50 by wenhuach21
add support for gemma4 model
#1655 opened 2026-04-03 07:25 by n1ck-guo
fix gguf issue in alg_ext.py
#1649 opened 2026-04-02 09:44 by wenhuach21
Enable low_cpu_mem_usage for mxfp/nvfp
#1648 opened 2026-04-02 08:26 by Kaihui-intel
support WOQ model input, such as kimi2.5
#1642 opened 2026-03-31 03:39 by xin3he
[not4landing]hadamard change
#1641 opened 2026-03-31 03:11 by wenhuach21
fix nextstep loading issue
#1640 opened 2026-03-30 12:59 by xin3he
[mllm] support longcat_next
#1637 opened 2026-03-30 06:28 by xin3he
[Draft] Support TurboQuant KV-cache quantization
#1634 opened 2026-03-27 13:28 by lvliang-intel
Support ByteDance-Seed/BAGEL-7B-MoT quantization in w4a16 format
#1633 opened 2026-03-27 12:44 by lvliang-intel
Support diffusion model AIDC-AI/Ovis-Image-7B quantization
#1616 opened 2026-03-25 12:50 by lvliang-intel
Enhance performance test
#1610 opened 2026-03-25 06:30 by XuehaoSun
feat: add --dry-run estimation mode
#1592 opened 2026-03-22 06:05 by mvanhorn
Refactor module access to use PyTorch get_submodule / set_submodule
#1590 opened 2026-03-20 15:32 by scopophobic
Add Google-style docstrings to auto_round core modules and data_type/utils
#1559 opened 2026-03-18 05:33 by Copilot
new architecture for auto_round
api/new · engineering
#1542 opened 2026-03-13 02:08 by n1ck-guo · 0.12.0
[N4Landing]update
#1538 opened 2026-03-12 09:15 by wenhuach21 · draft
Enhance llmc CI on GPU and XPU
#1483 opened 2026-03-02 08:41 by chensuyue · 0.13.0
Add asym for XPU backend.
#1316 opened 2026-01-22 03:39 by luoyu-intel
Robust FP8 layer detection for ignore_layers (#1283)
#1289 opened 2026-01-15 14:21 by scopophobic
Fix ignore_layers not working for FP8 models
#1286 opened 2026-01-15 04:15 by Copilot
[WIP][refactor quantizers][step 1] refactor rtn and tuning
#1278 opened 2026-01-14 08:52 by n1ck-guo
add per-task lm_eval args for experimental usage
#1017 opened 2025-11-11 07:22 by WeiweiZhang1 · Stale