huggingface/accelerate
Commits (branch: v0.18-release)
All commits below were committed about 2 years ago and carry a Verified signature.

ecd12888  Release: v0.18.0  (sgugger)
a826e444  Handle multiple tied parameters (#1241)  (sgugger)
1fe27e7c  Hardware Auto-Setup Example/Tutorial for Distributed Launch (#1227)  (Caroline Chen)
c1a6c209  Change multinode to multigpu (#1247)  (muellerzr)
8ebd6ab2  backfill ds plugin attributes when using ds_config (#1235)  (pacman100)
ea9b8547  remove empty dicts while saving accelerate config (#1236)  (pacman100)
420ff21c  extensions has been removed and replaced by customizations (#1075)  (dbpprt)
b1b33127  Make grad accum steps mutable on the Accelerator object (#1233)  (muellerzr)
6e4e8702  add additional check before deleting env variable (#1229)  (Chris-hughes10)
a3065e18  Silence dynamo_backend (#1226)  (muellerzr)
4eaf36e1  docs: add finetuner to ppl who use accelerate (#1224)  (Wang Bo)
e7bb060c  Fix get_logger kwarg documentation issue (#1222)  (bcol23)
a15d3074  Fix bug in loading launch config (#1218)  (neumyor)
7e7f3445  FIx TPU gradient state (#1219)  (muellerzr)
10c67463  ds offload optim fix to use CPUAdam (#1208)  (pacman100)
82c2665c  Fix example in accumulate method (#1211)  (VikParuchuri)
2930cac6  Fix typo in TPU config (#1202)  (muellerzr)
901ab69a  Better error message when using multi-GPU and Accelerate on torch <1.9.1 (#1203)  (muellerzr)
780e4aa3  Fix tied weights load (#1204)  (sgugger)
e4620984  Make the Scheduler adjust the steps taken relative to the gradient accumulation steps (#1187)  (muellerzr)
017a98c0  Fixup --fsdp (#1198)  (muellerzr)
d1aa5581  [`Accelerator`] We should not call `to` on modules that wraps `accelerate` loaded models (#1172)  (younesbelkada)
41479fe4  Set drop last to ensure modulo16 restriction for fp8 (#1189)  (ksivaman)
eac5d13c  Only convert linear layers with weights multiple of 16 (#1188)  (sgugger)
b228136c  add `use_orig_params` to FullyShardedDataParallelPlugin (#1184)  (pacman100)
90deb748  Add documentation about PyTorch FSDP state dict behavior (#1181)  (VikParuchuri)
d9427087  Support special mapping of dtypes when preparing device map (#1179)  (sgugger)
37831808  fixed typo in launch.py tpu_pod_launcher (#1180)  (hackpert)
ea836f30  Add repr to AlignHook for easier debugging. (#1177)  (sgugger)
a4c94762  Run accelerate_test in cli (#1176)  (muellerzr)
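Several commits in this release touch gradient accumulation (#1233, #1187): with accumulation, the optimizer and LR scheduler should advance once per accumulated group of batches, not once per batch. The following is a minimal, self-contained sketch of that counting rule only; the function name and signature are illustrative and are not Accelerate's actual API.

```python
def scheduler_steps(num_batches: int, accumulation_steps: int) -> int:
    """Illustrative helper (not part of Accelerate): number of optimizer/
    scheduler steps taken over `num_batches` batches when gradients are
    accumulated over `accumulation_steps` batches before each step."""
    if accumulation_steps < 1:
        raise ValueError("accumulation_steps must be >= 1")
    # Only complete accumulation groups trigger an optimizer step,
    # so leftover batches at the end do not advance the scheduler.
    return num_batches // accumulation_steps

# With 100 batches and 4-step accumulation, the scheduler advances 25 times.
print(scheduler_steps(100, 4))
```

This is why a scheduler configured for "total training steps" must be given the optimizer-step count, not the raw batch count, once accumulation is enabled.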