huggingface/accelerate: Pull Requests
- Disable hook compile (#3888 by SunMarc, merged 2025-12-17 15:53)
- Update support of Megatron-LM PR 2 (#3887 by pengdurice, merged 2025-12-16 13:00)
- Fix: Remove duplicate W&B initialization in offline mode (#3818) (#3886 by shantanugupta2004, merged 2025-12-16 13:10)
- using `spawn` instead of `fork` for XPU device (#3884 by kaixuanliu, merged 2025-12-15 11:22)
- Remove ipex (#3883 by yao-matrix, merged 2025-12-15 11:22)
- Fix KeyError in extract_model_from_parallel for partial torch.compile (#3881 by amanzoni1, merged 2025-12-16 13:00)
- [DeepSpeed] scale grad for zero-2 (#3880 by kashif, closed 2025-12-11 20:37)
- Fix FSDP2 tied embedding errors with targeted ValueError guidance (#3878 by amanzoni1, merged 2025-12-11 13:02)
- Avoid using nvidia-smi on a CPU-only Colab instance (#3872 by FlorianVal, merged 2025-12-04 15:35)
- fix: mixed_precision param from accelerate config should be used for FSDP mp (#3864 by kmehant, closed 2025-12-02 09:03)
- [SP and CP] error out if both CP and SP enabled (#3862 by kashif, merged 2025-11-28 14:36)
- [SP] fix loss computation example (label: bug) (#3858 by kashif, merged 2025-11-28 10:43)
- add MS-AMP deprecation warnings (#3857 by neha222222, merged 2025-12-08 13:41)
- Fix execution with Transformer Engine (#3852 by ksivaman, merged 2025-12-01 13:36)
- Update PR template, setup.py author email (#3851 by tomaarsen, merged 2025-11-25 11:15)
- Allow non-Tensor values in a batch with `dispatch_batches=True` (#3850 by tomaarsen, merged 2025-11-26 16:57)
- Upcast FSDP2 parameters only if requires_grad (#3848 by ojh31, merged 2025-11-26 15:03)
- feat: added fine tuning example focused on TPUs (#3847 by tengomucho, merged 2025-11-24 16:40)
- fix module and optimizer parameter mismatch before prepare_tp_ (#3845 by naomili0924, merged 2025-11-27 11:00)
- device type helper (#3843 by kashif, merged 2025-11-21 11:11)
- Updating support of Megatron-LM (#3842 by pengdurice, merged 2025-12-03 13:15)
- use self hosted runner (#3841 by SunMarc, merged 2025-11-20 13:12)
- Add num_processes and parallelism_config parameters (#3837 by MihaiB-dev, closed 2025-12-21 15:07)
- [Bug] Update torch.optim.Optimizer parameter states after tensor parallelism (#3835 by naomili0924, merged 2025-11-19 13:04)
- ArXiv -> HF Papers (#3834 by qgallouedec, merged 2025-11-10 12:51)
- Fix FP8 torchao default config with padding and FSDP2 all-gather support (#3831 by shimizust, merged 2025-12-03 13:16)
- update typo in bnb quantisation 4bit flag docstring (#3828 by hbraith, merged 2025-11-04 14:26)
- Fix typo in broadcast_object_list docstring (#3823 by wsntxxn, merged 2025-11-13 15:01)
- decouple backward and step in accelerator/deepspeed (#3819 by naomili0924, closed 2025-12-16 15:09)
- Deepspeed Ulysses/ALST integration (#3817 by stas00, merged 2025-11-20 17:24)