huggingface/accelerate
Open Pull Requests
enable big_model_inference on xpu
#3595 opened 2025-05-27 07:29 by yao-matrix
Update Gaudi Runners
#3593 opened 2025-05-26 13:46 by IlyasMoutawwakil
Remove device_count for TPU launcher to avoid initializing runtime
#3587 opened 2025-05-24 16:18 by sorgfresser
[FSDP2, Do not merge] FP8 adjustments
#3585 opened 2025-05-22 14:27 by S1ro1
feat: use datasets.IterableDataset shard if possible.
#3583 opened 2025-05-22 09:11 by ValMystletainn
Add CustomTypesDataLoader for handling any iterable dataloader
#3519 opened 2025-04-19 05:26 by ved1beta
WIP: Compose TP + DDP/FSDP2
#3498 opened 2025-04-10 16:10 by S1ro1
(fix) remove sampler_is_batch_sampler code in prepare_data_loader(..)
#3469 opened 2025-03-31 13:10 by suzyahyah
Add bf16/fp16 support for amp with mps device [wip]
#3373 opened 2025-01-29 14:34 by SunMarc
[WIP] optimize infer_auto_device_map for multi-GPU allocation [wip]
#3321 opened 2025-01-02 20:05 by Nech-C
Some adjustment for supporting Deepspeed-Ulysses [wip]
#2877 opened 2024-06-20 16:50 by zeyugao
handle weight sharing with init_on_device [wip]
#2737 opened 2024-05-03 01:07 by aws-rhsoln
[DO NOT MERGE] add all level buffer support when computing infer_auto_device_map [enhancement] [feature request] [wip]
#2663 opened 2024-04-12 12:52 by SunMarc
Enable cpu offload with weights inside the module [wip]
#2214 opened 2023-12-04 23:11 by SunMarc