pytorch/benchmark
Commits
0.1
088_torchbench_torchao_updates
290-add-dump-ir
290-add-dump-ir2
Chillee-patch-1
DALLE2_pytorch
DRAFT-of-splitting-optim-benchmark
H-Huang-patch-1
T5
ZainRizvi-patch-1
aaron-add-check-device-test
aaron-add-metadata
aaron-cleanup-run
add-4wd-attention_is_all_you_need
add-4wd-fambench_dlrm
add-b200-tests
add-delta-optim
add-flux
add-license-1
add-llama_v2_7b_8h
add-moco-train-bs-comments
add-new-models-to-dense
add-optim-benchmarks
add-stable-diffusion-to-dense
adnanaziz
agunapal/add_beanmachine_pplbench
alanwaketan/ltc
alanwaketan/timm_nfnet
allow-testpy-to-skip-blacklist
always-delete-model
angelayi/update_hf_pin
another-optim-fix
another-tweak-for-maml_omniglot
atalman/r2.0.0
atalman/r2.1.0
atalman/r2.1.1
atalman/r2.2.x
atalman/r2.2.0
atalman/r2.4.0
atalman/r2.4.1
atalman/2.0.1_T4
atalman-patch-1
atalman-patch-2
atalman-patch-3
atalman-patch-4
atalman-patch-5
atencmake
bark1
bert_seq_fix
bf/register-squashed-normal-type
bf/yolo
big-oopsie
calculate_score_per_config
camyllh/fix_gym_errors_on_timm_yaml
chuanqiw/add_gptj
chuanqiw/add_iters_param
chuanqiw/add_throughput_metric
chuanqiw/cpu_userbm_fix
chuanqiw/cpu_userbm_launcher
chuanqiw/cpu_userbm_metrics
chuanqiw/inductor_quant
clean-up-gat
clean-workspace
cleanup
cleanup-torchbench-workflows
correct-learningtopaint-model
cpp_bench_tmp
cron_job
davidberard98/ddp_dynamo_patches
davidberard98/ddp_dynamo_reuse_alloc
davidberard98/ddp_experiments
davidberard98/ddp-nov07
davidberard98/ddp-nov08
davidberard98/fsdp-nov18
davidberard98/reuse_allocation_ddp
davidberard98/reuse_allocation_ddp2
davidberard98/reuse_allocation_draft
davidberard98/skip-dynamo-optimizer
ddp-fixes
default-factory
delete-unused-files
desertfire/rotary-embedding-torch
deshard
disable-optim-pt2
do-not-keep-around-the-task-spec
driazati/awsdefault
driazati/rds
enable_pdt_for_pytorch_struct
enable_script_pdt_api
enable_script_pdt_in_gen_torchvision_benchmarks.py
erichan1/add-bert-distributed-none
erichan1/add-distributed-e2e-t5
erichan1/add-distributed-readme
erichan1/bert-fsdp
erichan1/fix-deepspeed
erichan1/t5-fsdp
errors-live-in-benchmark
even-higher-threshold
exclude-cpu-reporting-and-up-delta
exclude-pt2-on-most-models
exclude-yolov3-nadam
explicit-BM-API
export-D58301337
export-D61229059
export-D61392607
export-D61809602
export-D61819148
export-D61929352
export-D62404528
export-D64208508
export-D71412041
fastNLP
findhao/add_option_to_disable_metrics
findhao/add-api-coverage-test
findhao/add-dcgm-embedded-mode
findhao/add-memusage-nvml
findhao/add-phlippe-densenet
findhao/add-resnet
findhao/addtorchexpert
findhao/enable-mem-peak
findhao/fix-bug-dcgm
findhao/fix-bug-for-multigpus
findhao/fix-dalle2
findhao/fix-dcgm-compatibility
findhao/fix-dependency
findhao/fix-if-check
findhao/fix-mem-bug
findhao/fix-pynvml
findhao/fix-pyproject
findhao/fix-typo
findhao/opbench15
findhao/opbench16
findhao/operatorbench1
findhao/operatorbench2
findhao/operatorbench3
findhao/operatorbench4
findhao/operatorbench6
findhao/operatorbench7
findhao/operatorbench8
findhao/operatorbench9
findhao/operatorbench10
findhao/operatorbench11
findhao/operatorbench13
findhao/print-break-graphs
findhao/remove-fvcore
findhao/reorg-args
findhao/rocm-test
findhao/test
findhao/update_cudnn_config
findhao/update_model_task
findhao/update-citation
findhao/update-numba
fix_dense_ctor
fix_lazy_bench1
fix_time
fix_torchtext_UDPOS
fix_torchtext_dataset_import
fix_torchtext_imports
fix_torchtext_imports_1
fix-1888
fix-1942
fix-OOMs-YAY
fix-check-device-get-module
fix-densenet-to-paper
fix-dlrm-to-paper
fix-optim-regression-detector
fix-pr-tests
fix-some-bugs
fix-tacotron2-train-to-code
fixSync
fixup-T164911652-main
fixup-T198312900-main
gelu
generate_spec_config
get-rid-of-dead-reference
gh/HDCharles/1/base
gh/HDCharles/1/head
gh/HDCharles/1/orig
gh/HDCharles/2/base
gh/HDCharles/2/head
gh/HDCharles/2/orig
gh/HDCharles/3/base
gh/HDCharles/3/head
gh/HDCharles/3/orig
gh/HDCharles/4/base
gh/HDCharles/4/head
gh/HDCharles/4/orig
gh/HDCharles/5/base
gh/HDCharles/5/head
gh/HDCharles/5/orig
gh/HDCharles/6/base
gh/HDCharles/6/head
gh/HDCharles/6/orig
gh/davidberard98/17/base
gh/davidberard98/17/head
gh/davidberard98/17/orig
gh/davidberard98/31/base
gh/davidberard98/31/orig
gh/davidberard98/32/base
gh/davidberard98/32/head
gh/davidberard98/32/orig
gh/davidberard98/33/base
gh/davidberard98/33/head
gh/davidberard98/33/orig
gh/huydhn/1/base
gh/huydhn/1/head
gh/jamesjwu/1/base
gh/jamesjwu/1/head
gh/jamesjwu/1/orig
gh/jamesjwu/2/base
gh/jamesjwu/2/head
gh/jamesjwu/2/orig
gh/robieta/error_handling
gh/taylorrobie/broken_cases
gh/taylorrobie/callgrind_scribe
gh/taylorrobie/install_logging
gh/taylorrobie/v1_isolation
gh/tugsbayasgalan/1/base
gh/tugsbayasgalan/1/head
gh/tugsbayasgalan/1/orig
gh/tugsbayasgalan/2/base
gh/tugsbayasgalan/2/head
gh/tugsbayasgalan/2/orig
gh/xmfan/1/base
gh/xmfan/1/head
gh/xmfan/1/orig
gh/xuzhao9/1/orig
gh/zdevito/11/base
gh/zdevito/11/head
gh/zdevito/11/orig
gh/zdevito/12/base
gh/zdevito/12/head
gh/zdevito/12/orig
hoy/updateFBGEMM
i-am-silly
ignore-zips-and-pickles
improve-dlrm-utilization
improve-testpy-excludelist
install-right-numpy
isoneutral_mixing
janeyx99-patch-1
janeyx99-patch-2
jeanschmidt/rm_scale-config
jeanschmidt/try_fix_memory
juliagmt/test
krovatkin/attention_freeze
krovatkin/check_env
krovatkin/check_results
krovatkin/check_results2
krovatkin/ci_lazy
krovatkin/demucs_fix
krovatkin/env2
krovatkin/eval_train
krovatkin/fix_cuda
krovatkin/fix_timeout2
krovatkin/fix_yolov3
krovatkin/freeze_struct
krovatkin/freeze_suffix
krovatkin/freeze_v1
krovatkin/freeze_v2
krovatkin/fuser_flag
krovatkin/lp
krovatkin/ltc2main
krovatkin/no_grad
krovatkin/no_model_load
krovatkin/nvfuser
krovatkin/opt_for_inference
krovatkin/profile2
krovatkin/set_freeze
krovatkin/set_freeze2
krovatkin/set_mode
krovatkin/setup_custom_pytorch
krovatkin/spacy_0.1
krovatkin/stargan_freeze
krovatkin/timm
krovatkin/unet
krovatkin/update_spacy
krovatkin/wconstab/ltc
lazy_bench
learning_paint_super
lit-llama-canary
llama_fix
llama_v2_all
llama
local
lstm
main
make-issue-optim
malfet/pin-rapidfuzz-for-doctr
migrate-optim-ubs-to-a100-fr
minor_fix_task_for_demucs
minor-tweak-numpy-core
mobilenetv3_large
more-dense
mostafaelhoushi-patch-readme-1
move-optim-out-of-loop
move-pt2-exclusion
msaroufim/asoduaodub
msaroufim/authsd
msaroufim/cip
msaroufim/cm3train
msaroufim/fixsamdtype
msaroufim/hf_clip
msaroufim/llama2_70b
msaroufim/llamatrain
msaroufim/llamav2
msaroufim/sam
msaroufim/sam-medeval
msaroufim/sam-realeval
msaroufim/sdimage
msaroufim/sdxl
msaroufim/setup.py
msaroufim-patch-1
msaroufim-patch-2
msaroufim-patch-3
msaroufim-patch-4
msaroufim-patch-5
mvz-add-option
mvz-s-legacy-old
nanogpt_train
nikitha_removeDLRM
no-pt2-for-loop
opencv-python-compatibility
optim-access-BenchmarkModel
optim-access-e2eBM
optim-benchmarks
optim-benchmarks-new-runner
optim-benchmarks-output-dir-option
orionr-patch-1
perf_test_1.13_rc_cu117
perf_test_1.13_rc
perf-release-2.7
postagger
pr/bnlstm
pr/cudnn-noodling
pr/mlstm
pr/mlstm-baseline
print-runnable-repros
print-torch-version
rcnn
refactor-ub-utils
remove_score_yml
remove-asserts-in-yolov3
replace_runners_prefix_20240725165345
replace_runners_prefix_20240725195321
replace-pytorch-labs-20250812-205722
report-runtime-errors-to-issue
resurrect-ao-benchmark
revamp-nightly-docker-image
revert-339-gh/taylorrobie/callgrind_scribe
revert-2621
rm-torchrec-dep
robieta/benchmark_timeout
robieta/bisect_robustness
robieta/collect_profiles
robieta/run_verbose
robieta/set_affinity
run-optim-in-subprocesses
run-subset-ci
sam-is-dense
saves-time-debug
script_dlrm
script_tacotron
sdym/artifacts-v4
sdym/docstring
sdym/fix-gym
sdym/fixao
sdym/hf-yaml
sdym/newfixao
sdym/require_grad
sdym/sam-leak
sdym/test-ao
sdym/update-circleci
separate_out_compile_time
set_device_jit
skip-cpu-optim-ub
skip-deprecated-stable-diffusion-2
strictify-optimizer-get-set
submods
temp_fix
test-delet-me-later
tidy-up-optim
torchbench-pin-commit
try-fixing-optim-bms
tugsuu_export
tugsuu_export-v2
update_init_for_models
update_init_for_vgg16_maml_models
update_models_with_domain_task
update_score_yml
update-model-domain
update-opt-bms-for-clip-and-others
update-optim-accessors-in-e2e
update-transformers-with-dependabot
upload-always
use-docker
use-large-and-gate-nadam
use-right-percent
v1.0
v2.0
v3.0
wconstab/action
wconstab/archive
wconstab/archive-compare
wconstab/compare_torch_versions
wconstab/ddp_experiments
wconstab/ddp_experiments2
wconstab/debugltc
wconstab/deepspeed
wconstab/dynamic
wconstab/fields
wconstab/fix_speech_transformer
wconstab/fix
wconstab/fixes_for_lt
wconstab/localdist
wconstab/ltc_fix_attention
wconstab/ltc
wconstab/ltc-noopt
wconstab/ltc-nvfuser-nofallback
wconstab/mem
wconstab/metrics
wconstab/noyolo
wconstab/old-ltc
wconstab/plot
wconstab/plots
wconstab/remove_maskrcnn
wconstab/revert
wconstab/score
wconstab/separate_torchtext_install
wconstab/size
wconstab/transformer_variable
wconstab/ttest
wdvr/numpversion
wdvr/pin_numpy_update_huggingfacehub
wdvr-patch-1
wdvr-patch-2-1
wdvr-patch-2
weights_only_flip
weiwangmeta/a100_bc_utility
weiwangmeta/r1.13.1-A100-notaskset
weiwangmeta/r1.13.1-A100
weiwangmeta/r1.13.1
weiwangmeta/r2.0.0_T4
weiwangmeta/r2.0.0
wltc
wltc2
wwei9/fix-attention
wwei9/fix-lower-api
xmfan/fix_dashboard
xmfan/modded_nanogpt
xmfan/nanogpt_train
xmfan/oss_benchmark_script
xmfan/oss_benchmarks
xmfan/simple_gpt_manual_tp
xmfan/simple_gpt
xmfan/unify_sd
xmfan/yolov3_reduce_bs
xuanzhang816
xz9/add-backfill
xz9/add-fambench-rnnt
xz9/add-hstu-ragged
xz9/add-install
xz9/add-mirage
xz9/add-mlcommons-wmt
xz9/add-tritonbench-ci
xz9/add-wlm-trans
xz9/bump-transformers
xz9/cleanup
xz9/fix-k8s
xz9/fix-learningtopaint
xz9/fix-maml
xz9/fix-workflow
xz9/remove-timm
xz9/remove-tritonbench
xz9/test-rel-2
xz9/torch-pin
xz9/update-flash-attn
xz9/upgrade-cu128
zainr/disable-v3-nightly
dont format time · Krovatkin · committed 4 years ago · 199354c8
Merge branch 'main' into wconstab/ltc · Jiewen Tan · committed 4 years ago · ac824d9f
[wconstab/ltc] Convert the json output from check_lazy.py to csv (#623) · alanwaketan · committed 4 years ago · Verified · 50b56a1a
add timestamps to model launches (#619) · Krovatkin · committed 4 years ago · Verified · 15d22c6c
Fix several bugs in the bisection script. (#628) · xuzhao9 · committed 4 years ago · 8b6833fa
Fix training hparams in Tacotron2 (#610) · aaronenyeshi · committed 4 years ago · 9ade7265
Fix the train batch size of densenet121 (#579) · aaronenyeshi · committed 4 years ago · f0350463
Only load the module specified for gen-summary-md (#622) · aaronenyeshi · committed 4 years ago · 70605527
Make SubprocessWorker surface errors in more cases (#608) · Taylor Robie · committed 4 years ago · 45028272
[wconstab/ltc] Try running check_lazy.py on the CI (#616) · alanwaketan · committed 4 years ago · Verified · e563d354
Fix bisection workflow. (#621) · xuzhao9 · committed 4 years ago · 7db1e31f
Merge branch 'main' into wconstab/ltc · Jiewen Tan · committed 4 years ago · aa45d35d
Checkout the lazy_tensor_staging branch from the pytorch repo. (#615) · xuzhao9 · committed 4 years ago · 9f4b27c3
Remove pathlib install command as it is built-in 3.8. (#613) · xuzhao9 · committed 4 years ago · ef847774
quick fix (#599) · Krovatkin · committed 4 years ago · Verified · 2204b175
hack demucs (#601) · Krovatkin · committed 4 years ago · Verified · d5ca0c71
Use the correct batch size for vgg16 training (#583) · xuzhao9 · committed 4 years ago · 7eb31b1d
Added lazy tensor testing CI. (#600) · xuzhao9 · committed 4 years ago · bb7256f2
Fixes squeezenet hierarchical batching. (#609) · xuzhao9 · committed 4 years ago · b0a71ba7
Set the correct train batch size for squeezenet. (#574) · xuzhao9 · committed 4 years ago · cee0ce46
Fix the runner temp interface. (#607) · xuzhao9 · committed 4 years ago · c2e36b87
Remove V0 workflow. Use CUDA 11.3 for V1 workflow. (#596) · xuzhao9 · committed 4 years ago · 26229b6e
Use opacus version < 1.0 to mitigate BC-breaking API change (#605) · xuzhao9 · committed 4 years ago · 3e7bf736
Update the train_bs and eval_bs for super_slomo. (#595) · xuzhao9 · committed 4 years ago · 1dbe4bc7
Improve test.py EXCLUDELIST to disable by model, test, and device (#592) · aaronenyeshi · committed 4 years ago · 5e898020
Fix train batch size for attention model. (#593) · xuzhao9 · committed 4 years ago · 488d2071
Fix train batch size for pytorch_struct. (#591) · xuzhao9 · committed 4 years ago · 121ea023
Fix background matting batch size. (#594) · xuzhao9 · committed 4 years ago · 5b9b32f9
Fix the train options of LearningToPaint (#581) · aaronenyeshi · committed 4 years ago · a5213f1c
Fix the train arch of DLRM (#580) · aaronenyeshi · committed 4 years ago · 149410d8