huggingface/accelerate
3d-parallelism
argparse
better-err
big_api
check-docs
check-for-nccl
composable-tp
context-parallel
context-parallel-experiments
context-parallel-flex-attn
cp-dataloader
cp-pc
dataloader-log
debug-tests
deepspeed-inference
deepspeed-version
device_map_xla_support
disable-seedale-rs
enable-dash
feat/async-checkpointing
feat-decorator-to-purge-modified-accelerate-env-vars
fix
fix-compile-regions
fix-deepspeed-autobs
fix-dispatch-model-tied-params-memory
fix-fp8
fix-generate
fix-grad-norm
fix-pjrt_device
fix-prod
fix-warnings
fork-tester
fp8-gradient-checkpointing
fp8-stuff
fsdp2-tp
fully-remove-accelerate-config
grad-acc-optimizer-fixes
grad-accum-test
import-util
llama-to-mistral
load-model-across-devices
low-bit-fsdp2
main
make-version-tests-better
mishig25-patch-1
mishig25-patch-2
mixed-precision-experiments
ms-amp
muellerzr-ds-debugging
muellerzr-fix-1.0
muellerzr-fp8-deepspeed-support-v2
muellerzr-msamp-ds-fsdp
muellerzr-nightly-fixings
muellerzr-stateful-dl
new-instance-type
nouamane/context-parallel
parallelism-config
pin-ruff
pip-uv
pippy-duplicates
pippy-integration
reaction-based-runs
release-v0.6.1
release-v0.6.2
revert-3671
revert-fsdp-improv
revert-pr
rm-112
runner
safetensors-default
slack-reporter
speedup-docker
test-data
test-deepspeed-unpin
torch-22
trainer-tests
transformers-nd-parallel
ulysses-sp
unfreeze-4090
use-partialstate
uv-take2
v0.7-release
v0.12-release
v0.13-release
v0.14-release
v0.15-release
v0.16-release
v0.17-release
v0.18-release
v0.19-release
v0.20-release
v0.21-release
v0.22-release
v0.23-release
v0.24-release
v0.25.0-release
v0.26.0-release
v0.26.1-release
v0.27.0-release
v0.28.0-release
v0.29.0-release
v0.30.0-release
v0.31.0-release
v0.32.0-release
v0.33.0-release
v0.34.0-release
v1.0.0-release
v1.1.0-release
v1.2.0-release
v1.3.0-release
v1.4.0-release
v1.5.0-release
v1.6.0-release
v1.7.0-release
v1.8.0-release
v1.9.0-release
v1.10.0-release
v1.11.0-release
v1.12.0-release
wip-from-pretrained
xla-gpu-runners
Commits
Document, document, document (muellerzr, 2 years ago, 3c464369)
Make note about recursion loop (muellerzr, 2 years ago, 694bb8a4)
Allow for users to pass in max_meory (muellerzr, 2 years ago, e3cb2888)
Rm typing literal, only pippy for pippy (muellerzr, 2 years ago, e974624c)
Do it at tracing too (muellerzr, 2 years ago, 6e1e02f6)
All tests passing! (muellerzr, 2 years ago, 5970e8e3)
Use pad_input_tensor (muellerzr, 2 years ago, 136f495c)
Much cleaner implementation (muellerzr, 2 years ago, a45e7f96)
Almost working version (muellerzr, 2 years ago, 6864314c)
add some failing test (SunMarc, 2 years ago, 57277673)
Add test (muellerzr, 2 years ago, 7e958024)
bs=1 case (muellerzr, 2 years ago, 6167d0ba)
Clean (muellerzr, 2 years ago, 7045a290)
Update the source (muellerzr, 2 years ago, fcc72a37)
With tests (muellerzr, 2 years ago, fed86c4c)
Refactor to utils (muellerzr, 2 years ago, 1a9181c4)
Use dataloader-like logic (muellerzr, 2 years ago, c7800f50)
Start, need to test (muellerzr, 2 years ago, 2e23e1e9)
Test cv model (muellerzr, 2 years ago, 8a452c18)
Less slicy-dicy (muellerzr, 2 years ago, a98e51b0)
Break early after the first valid bs is found (muellerzr, 2 years ago, b1c565d0)
Update src/accelerate/inference.py (muellerzr, 2 years ago, verified, 0af31ff0)
Fix test (muellerzr, 2 years ago, 3534a342)
Allow for dynamic batch paddign (muellerzr, 2 years ago, 73e64a1e)
fix case num_process=1 (SunMarc, 2 years ago, 7ca4bccf)
Store split points in hf_split_points (muellerzr, 2 years ago, 8792a8c5)
Put split_points in pipelien (muellerzr, 2 years ago, e3f6b99b)
Use no split module classes explicitly (muellerzr, 2 years ago, df7779aa)
Tests (muellerzr, 2 years ago, 77f8e92b)
working test (muellerzr, 2 years ago, 449eb8d9)