ddea6980 - [PyTorch][JIT] Don't refcount Type singletons (#69579)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69579

This should help us avoid reference-counting overhead on singleton Type subclasses without a major rewrite of the Type subsystem.

ghstack-source-id: 146643993

Test Plan:
Ran //caffe2/caffe2/fb/high_perf_models/pytorch/benchmark_framework_overheads:cpp_benchmark with arguments `--op empty -niter 40 --stressTestRecordFunction --captureRecordFunctionInputs` on devbig with turbo off.

Before:
```
I1206 13:47:15.037441 1201670 bench.cpp:144] Mean 0.737675
I1206 13:47:15.037463 1201670 bench.cpp:145] Median 0.736725
I1206 13:47:15.037468 1201670 bench.cpp:146] Min 0.722897
I1206 13:47:15.037473 1201670 bench.cpp:147] stddev 0.00508187
I1206 13:47:15.037482 1201670 bench.cpp:148] stddev / mean 0.00688903
```

After:
```
I1206 13:48:16.830123 1205612 bench.cpp:144] Mean 0.66988
I1206 13:48:16.830150 1205612 bench.cpp:145] Median 0.663956
I1206 13:48:16.830157 1205612 bench.cpp:146] Min 0.65986
I1206 13:48:16.830164 1205612 bench.cpp:147] stddev 0.0335928
I1206 13:48:16.830171 1205612 bench.cpp:148] stddev / mean 0.0501475
```

Static runtime startup is also improved: for CMF local_ro, the time to initialize a predictor went from 10.01s to 9.59s.

(Note: I wish I had a production workload to demonstrate the advantage of this on. I tried the ctr_mobile_feed local_ro net, but it was neutral. Anything that manipulates types or List/Dict a lot might be promising.)

Reviewed By: suo

Differential Revision: D32923880

fbshipit-source-id: c82ed6689b3598e61047fbcb2149982173127ff0
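The general technique can be illustrated with a minimal sketch. This is not PyTorch's actual implementation; the names `Type`, `IntType`, and `SingletonTypePtr` here are simplified stand-ins. The idea is that a type with exactly one process-wide instance never needs ownership tracking, so handles to it can be plain pointers, avoiding the atomic increment/decrement that a `shared_ptr`/`intrusive_ptr` copy would incur:

```cpp
#include <atomic>
#include <cstdio>

// Hypothetical base class: ordinary Type instances are refcounted.
struct Type {
  virtual ~Type() = default;
  std::atomic<int> refcount{0};  // used only for non-singleton types
};

// A singleton Type subclass: one static instance, never destroyed
// through the refcount, so handles need not touch it at all.
struct IntType : Type {
  static IntType* get() {
    static IntType instance;  // lives for the whole process
    return &instance;
  }
};

// Non-owning handle for singleton types. Copying it is a plain
// pointer copy: no atomic refcount traffic on the hot path.
template <typename T>
class SingletonTypePtr {
 public:
  explicit SingletonTypePtr(T* p) : ptr_(p) {}
  T* get() const { return ptr_; }
  T* operator->() const { return ptr_; }
 private:
  T* ptr_;  // raw pointer; the singleton outlives every handle
};

void demo() {
  SingletonTypePtr<IntType> a(IntType::get());
  SingletonTypePtr<IntType> b = a;  // trivially copyable, no atomics
  std::printf("same instance: %d\n", a.get() == b.get());
}
```

In a benchmark like the one above, the win comes from code that copies type handles frequently (e.g. while walking schemas or container types): each copy drops from two atomic operations to a register move.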