pytorch/benchmark
Branches (current: camyllh/fix_gym_errors_on_timm_yaml):
0.1
088_torchbench_torchao_updates
290-add-dump-ir
290-add-dump-ir2
Chillee-patch-1
DALLE2_pytorch
DRAFT-of-splitting-optim-benchmark
H-Huang-patch-1
T5
ZainRizvi-patch-1
aaron-add-check-device-test
aaron-add-metadata
aaron-cleanup-run
add-4wd-attention_is_all_you_need
add-4wd-fambench_dlrm
add-b200-tests
add-delta-optim
add-flux
add-license-1
add-llama_v2_7b_8h
add-moco-train-bs-comments
add-new-models-to-dense
add-optim-benchmarks
add-stable-diffusion-to-dense
adnanaziz
agunapal/add_beanmachine_pplbench
alanwaketan/ltc
alanwaketan/timm_nfnet
allow-testpy-to-skip-blacklist
always-delete-model
angelayi/update_hf_pin
another-optim-fix
another-tweak-for-maml_omniglot
atalman/r2.0.0
atalman/r2.1.0
atalman/r2.1.1
atalman/r2.2.x
atalman/r2.2.0
atalman/r2.4.0
atalman/r2.4.1
atalman/2.0.1_T4
atalman-patch-1
atalman-patch-2
atalman-patch-3
atalman-patch-4
atalman-patch-5
atencmake
bark1
bert_seq_fix
bf/register-squashed-normal-type
bf/yolo
big-oopsie
calculate_score_per_config
camyllh/fix_gym_errors_on_timm_yaml
chuanqiw/add_gptj
chuanqiw/add_iters_param
chuanqiw/add_throughput_metric
chuanqiw/cpu_userbm_fix
chuanqiw/cpu_userbm_launcher
chuanqiw/cpu_userbm_metrics
chuanqiw/inductor_quant
clean-up-gat
clean-workspace
cleanup
cleanup-torchbench-workflows
correct-learningtopaint-model
cpp_bench_tmp
cron_job
davidberard98/ddp_dynamo_patches
davidberard98/ddp_dynamo_reuse_alloc
davidberard98/ddp_experiments
davidberard98/ddp-nov07
davidberard98/ddp-nov08
davidberard98/fsdp-nov18
davidberard98/reuse_allocation_ddp
davidberard98/reuse_allocation_ddp2
davidberard98/reuse_allocation_draft
davidberard98/skip-dynamo-optimizer
ddp-fixes
default-factory
delete-unused-files
desertfire/rotary-embedding-torch
deshard
disable-optim-pt2
do-not-keep-around-the-task-spec
driazati/awsdefault
driazati/rds
enable_pdt_for_pytorch_struct
enable_script_pdt_api
enable_script_pdt_in_gen_torchvision_benchmarks.py
erichan1/add-bert-distributed-none
erichan1/add-distributed-e2e-t5
erichan1/add-distributed-readme
erichan1/bert-fsdp
erichan1/fix-deepspeed
erichan1/t5-fsdp
errors-live-in-benchmark
even-higher-threshold
exclude-cpu-reporting-and-up-delta
exclude-pt2-on-most-models
exclude-yolov3-nadam
explicit-BM-API
export-D58301337
export-D61229059
export-D61392607
export-D61809602
export-D61819148
export-D61929352
export-D62404528
export-D64208508
export-D71412041
fastNLP
findhao/add_option_to_disable_metrics
findhao/add-api-coverage-test
findhao/add-dcgm-embedded-mode
findhao/add-memusage-nvml
findhao/add-phlippe-densenet
findhao/add-resnet
findhao/addtorchexpert
findhao/enable-mem-peak
findhao/fix-bug-dcgm
findhao/fix-bug-for-multigpus
findhao/fix-dalle2
findhao/fix-dcgm-compatibility
findhao/fix-dependency
findhao/fix-if-check
findhao/fix-mem-bug
findhao/fix-pynvml
findhao/fix-pyproject
findhao/fix-typo
findhao/opbench15
findhao/opbench16
findhao/operatorbench1
findhao/operatorbench2
findhao/operatorbench3
findhao/operatorbench4
findhao/operatorbench6
findhao/operatorbench7
findhao/operatorbench8
findhao/operatorbench9
findhao/operatorbench10
findhao/operatorbench11
findhao/operatorbench13
findhao/print-break-graphs
findhao/remove-fvcore
findhao/reorg-args
findhao/rocm-test
findhao/test
findhao/update_cudnn_config
findhao/update_model_task
findhao/update-citation
findhao/update-numba
fix_dense_ctor
fix_lazy_bench1
fix_time
fix_torchtext_UDPOS
fix_torchtext_dataset_import
fix_torchtext_imports
fix_torchtext_imports_1
fix-1888
fix-1942
fix-OOMs-YAY
fix-check-device-get-module
fix-densenet-to-paper
fix-dlrm-to-paper
fix-optim-regression-detector
fix-pr-tests
fix-some-bugs
fix-tacotron2-train-to-code
fixSync
fixup-T164911652-main
fixup-T198312900-main
gelu
generate_spec_config
get-rid-of-dead-reference
gh/HDCharles/1/base
gh/HDCharles/1/head
gh/HDCharles/1/orig
gh/HDCharles/2/base
gh/HDCharles/2/head
gh/HDCharles/2/orig
gh/HDCharles/3/base
gh/HDCharles/3/head
gh/HDCharles/3/orig
gh/HDCharles/4/base
gh/HDCharles/4/head
gh/HDCharles/4/orig
gh/HDCharles/5/base
gh/HDCharles/5/head
gh/HDCharles/5/orig
gh/HDCharles/6/base
gh/HDCharles/6/head
gh/HDCharles/6/orig
gh/davidberard98/17/base
gh/davidberard98/17/head
gh/davidberard98/17/orig
gh/davidberard98/31/base
gh/davidberard98/31/orig
gh/davidberard98/32/base
gh/davidberard98/32/head
gh/davidberard98/32/orig
gh/davidberard98/33/base
gh/davidberard98/33/head
gh/davidberard98/33/orig
gh/huydhn/1/base
gh/huydhn/1/head
gh/jamesjwu/1/base
gh/jamesjwu/1/head
gh/jamesjwu/1/orig
gh/jamesjwu/2/base
gh/jamesjwu/2/head
gh/jamesjwu/2/orig
gh/robieta/error_handling
gh/taylorrobie/broken_cases
gh/taylorrobie/callgrind_scribe
gh/taylorrobie/install_logging
gh/taylorrobie/v1_isolation
gh/tugsbayasgalan/1/base
gh/tugsbayasgalan/1/head
gh/tugsbayasgalan/1/orig
gh/tugsbayasgalan/2/base
gh/tugsbayasgalan/2/head
gh/tugsbayasgalan/2/orig
gh/xmfan/1/base
gh/xmfan/1/head
gh/xmfan/1/orig
gh/xuzhao9/1/orig
gh/zdevito/11/base
gh/zdevito/11/head
gh/zdevito/11/orig
gh/zdevito/12/base
gh/zdevito/12/head
gh/zdevito/12/orig
hoy/updateFBGEMM
i-am-silly
ignore-zips-and-pickles
improve-dlrm-utilization
improve-testpy-excludelist
install-right-numpy
isoneutral_mixing
janeyx99-patch-1
janeyx99-patch-2
jeanschmidt/rm_scale-config
jeanschmidt/try_fix_memory
juliagmt/test
krovatkin/attention_freeze
krovatkin/check_env
krovatkin/check_results
krovatkin/check_results2
krovatkin/ci_lazy
krovatkin/demucs_fix
krovatkin/env2
krovatkin/eval_train
krovatkin/fix_cuda
krovatkin/fix_timeout2
krovatkin/fix_yolov3
krovatkin/freeze_struct
krovatkin/freeze_suffix
krovatkin/freeze_v1
krovatkin/freeze_v2
krovatkin/fuser_flag
krovatkin/lp
krovatkin/ltc2main
krovatkin/no_grad
krovatkin/no_model_load
krovatkin/nvfuser
krovatkin/opt_for_inference
krovatkin/profile2
krovatkin/set_freeze
krovatkin/set_freeze2
krovatkin/set_mode
krovatkin/setup_custom_pytorch
krovatkin/spacy_0.1
krovatkin/stargan_freeze
krovatkin/timm
krovatkin/unet
krovatkin/update_spacy
krovatkin/wconstab/ltc
lazy_bench
learning_paint_super
lit-llama-canary
llama_fix
llama_v2_all
llama
local
lstm
main
make-issue-optim
malfet/pin-rapidfuzz-for-doctr
migrate-optim-ubs-to-a100-fr
minor_fix_task_for_demucs
minor-tweak-numpy-core
mobilenetv3_large
more-dense
mostafaelhoushi-patch-readme-1
move-optim-out-of-loop
move-pt2-exclusion
msaroufim/asoduaodub
msaroufim/authsd
msaroufim/cip
msaroufim/cm3train
msaroufim/fixsamdtype
msaroufim/hf_clip
msaroufim/llama2_70b
msaroufim/llamatrain
msaroufim/llamav2
msaroufim/sam
msaroufim/sam-medeval
msaroufim/sam-realeval
msaroufim/sdimage
msaroufim/sdxl
msaroufim/setup.py
msaroufim-patch-1
msaroufim-patch-2
msaroufim-patch-3
msaroufim-patch-4
msaroufim-patch-5
mvz-add-option
mvz-s-legacy-old
nanogpt_train
nikitha_removeDLRM
no-pt2-for-loop
opencv-python-compatibility
optim-access-BenchmarkModel
optim-access-e2eBM
optim-benchmarks
optim-benchmarks-new-runner
optim-benchmarks-output-dir-option
orionr-patch-1
perf_test_1.13_rc_cu117
perf_test_1.13_rc
perf-release-2.7
postagger
pr/bnlstm
pr/cudnn-noodling
pr/mlstm
pr/mlstm-baseline
print-runnable-repros
print-torch-version
rcnn
refactor-ub-utils
remove_score_yml
remove-asserts-in-yolov3
replace_runners_prefix_20240725165345
replace_runners_prefix_20240725195321
replace-pytorch-labs-20250812-205722
report-runtime-errors-to-issue
resurrect-ao-benchmark
revamp-nightly-docker-image
revert-339-gh/taylorrobie/callgrind_scribe
revert-2621
rm-torchrec-dep
robieta/benchmark_timeout
robieta/bisect_robustness
robieta/collect_profiles
robieta/run_verbose
robieta/set_affinity
run-optim-in-subprocesses
run-subset-ci
sam-is-dense
saves-time-debug
script_dlrm
script_tacotron
sdym/artifacts-v4
sdym/docstring
sdym/fix-gym
sdym/fixao
sdym/hf-yaml
sdym/newfixao
sdym/require_grad
sdym/sam-leak
sdym/test-ao
sdym/update-circleci
separate_out_compile_time
set_device_jit
skip-cpu-optim-ub
skip-deprecated-stable-diffusion-2
strictify-optimizer-get-set
submods
temp_fix
test-delet-me-later
tidy-up-optim
torchbench-pin-commit
try-fixing-optim-bms
tugsuu_export
tugsuu_export-v2
update_init_for_models
update_init_for_vgg16_maml_models
update_models_with_domain_task
update_score_yml
update-model-domain
update-opt-bms-for-clip-and-others
update-optim-accessors-in-e2e
update-transformers-with-dependabot
upload-always
use-docker
use-large-and-gate-nadam
use-right-percent
v1.0
v2.0
v3.0
wconstab/action
wconstab/archive
wconstab/archive-compare
wconstab/compare_torch_versions
wconstab/ddp_experiments
wconstab/ddp_experiments2
wconstab/debugltc
wconstab/deepspeed
wconstab/dynamic
wconstab/fields
wconstab/fix_speech_transformer
wconstab/fix
wconstab/fixes_for_lt
wconstab/localdist
wconstab/ltc_fix_attention
wconstab/ltc
wconstab/ltc-noopt
wconstab/ltc-nvfuser-nofallback
wconstab/mem
wconstab/metrics
wconstab/noyolo
wconstab/old-ltc
wconstab/plot
wconstab/plots
wconstab/remove_maskrcnn
wconstab/revert
wconstab/score
wconstab/separate_torchtext_install
wconstab/size
wconstab/transformer_variable
wconstab/ttest
wdvr/numpversion
wdvr/pin_numpy_update_huggingfacehub
wdvr-patch-1
wdvr-patch-2-1
wdvr-patch-2
weights_only_flip
weiwangmeta/a100_bc_utility
weiwangmeta/r1.13.1-A100-notaskset
weiwangmeta/r1.13.1-A100
weiwangmeta/r1.13.1
weiwangmeta/r2.0.0_T4
weiwangmeta/r2.0.0
wltc
wltc2
wwei9/fix-attention
wwei9/fix-lower-api
xmfan/fix_dashboard
xmfan/modded_nanogpt
xmfan/nanogpt_train
xmfan/oss_benchmark_script
xmfan/oss_benchmarks
xmfan/simple_gpt_manual_tp
xmfan/simple_gpt
xmfan/unify_sd
xmfan/yolov3_reduce_bs
xuanzhang816
xz9/add-backfill
xz9/add-fambench-rnnt
xz9/add-hstu-ragged
xz9/add-install
xz9/add-mirage
xz9/add-mlcommons-wmt
xz9/add-tritonbench-ci
xz9/add-wlm-trans
xz9/bump-transformers
xz9/cleanup
xz9/fix-k8s
xz9/fix-learningtopaint
xz9/fix-maml
xz9/fix-workflow
xz9/remove-timm
xz9/remove-tritonbench
xz9/test-rel-2
xz9/torch-pin
xz9/update-flash-attn
xz9/upgrade-cu128
zainr/disable-v3-nightly
Commits on camyllh/fix_gym_errors_on_timm_yaml:

27aad00a  update dependency (Camyll, committed 254 days ago)
9ec44cf4  add missing import change (Camyll, committed 255 days ago)
33a725cf  add missing import change (Camyll, committed 255 days ago)
7beaa914  add requirements fix (Camyll, committed 255 days ago)
2339bdbc  add gymnasium to requirements (Camyll, committed 255 days ago)
db28d57a  test if this helps (Camyll, committed 255 days ago)
6eb17f1e  reconstruct functions decorated in the compiled region properly (#150645) (williamwen42, committed 257 days ago)
98b06f00  Add pgo remote get/put timings to dynamo_compile (#150322) (masnesral, committed 258 days ago)
de0cb2f6  Fix mis-calculated memory compression ratio (#150695) (desertfire, committed 258 days ago)
4dbceeab  Fix `dict.items()` return type (#150112) (generatedunixname499836121, committed 261 days ago)
df6622bb  Make CompileEventLogger more defensive w.r.t to AOTAutogradCache and FXGraphCache (#150423) (jamesjwu, committed 261 days ago)
942979ef  Update how peak memory is measured (#150534) (desertfire, committed 262 days ago)
e5c91641  Always trace into tensor subclass `__torch_function__` (#149792) (StrongerXi, committed 262 days ago)
5186143e  add dynamo disable reasons to codebase (#150440) (williamwen42, committed 263 days ago)
8fae8cca  add APIs to determine a class is a namedtuple or PyStructSequence (#113257) (generatedunixname499836121, committed 263 days ago)
40a841b6  Revert D71711852 (Dark Knight, committed 264 days ago)
77276109  always set deterministic for xpu accuracy test (#149028) (generatedunixname499836121, committed 265 days ago)
cf97e29e  Fix _Waitcounter decorator and dd backward pass wait counter (#150235) (ppanchalia, committed 266 days ago)
d71863d0  Fix `is_compile_supported()` when `device_type` contains device index (#147837) (generatedunixname499836121, committed 268 days ago)
49c3f18f  Add --output-iter-metrics flag to cpu userbenchmark scripts (#2600) (murste01, committed 270 days ago)
a2b6092c  Fix handling of setattr with some tensor attributes (#149791) (StrongerXi, committed 271 days ago)
7481a407  use torch.compile ca API for benchmarks (#149647) (xmfan, committed 272 days ago)
90e73750  Unify `cuBLASLt` workspaces with `cuBLAS` workspaces (#145130) (generatedunixname499836121, committed 273 days ago)
10a7be34  Add python version to dynamo_compile table (#149419) (masnesral, committed 276 days ago)
2c5bc4ad  Switch off inference mode during compilation (#149321) (anijain2305, committed 278 days ago)
d1b2abbf  fix dynamic_shapes spec for moco (#148772) (#2601) (pianpwk, committed 278 days ago)
50e2f744  fix two accuracy regression (#149172) (shunting314, committed 279 days ago)
4dc08945  NotImplementedError: Model's DEFAULT_TRAIN_BSIZE is not implemented. (#2563) (ostrowskimarcin, committed 282 days ago)
ee4aa8ba  Set compile_id in the CachingAutotuner during compilation so we have it for dynamo_timed logging (#148693) (masnesral, committed 283 days ago)
c271568d  add log for skip reasons (BoyuanFeng, committed 285 days ago)