pytorch
6aec1eba - [aten] Make aten::flatten call native::reshape (#50859)

Commit
4 years ago
[aten] Make aten::flatten call native::reshape (#50859)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50859

Test Plan:

Unit test:

```
buck test //caffe2/test:torch
```

Benchmark:

```
MKL_NUM_THREADS=1 OMP_NUM_THREADS=1 numactl -m 0 -C 13 \
  ./buck-out/opt/gen/caffe2/caffe2/fb/predictor/ptvsc2_predictor_bench \
  --scripted_model=/home/hlu/ads/adindexer/adindexer_ctr_mobilefeed/pt/merge_v2/traced_precomputation.pt \
  --pt_inputs=/home/hlu/ads/adindexer/adindexer_ctr_mobilefeed/pt/merge_v2/container_precomputation_bs20.pt \
  --iters=10000 --warmup_iters=10000 --num_threads=1 --pt_enable_static_runtime=true \
  --pt_cleanup_activations=true --pt_enable_out_variant=true --do_profile=true
```

Reduces the total time spent on flatten from 1.22% to 0.97% (a net 0.25% reduction):

```
Before:
Static runtime ms per iter: 0.0725054. Iters per second: 13792.1
0.000857179 ms. 1.21862%. aten::flatten (1 nodes)

After:
Static runtime ms per iter: 0.0720371. Iters per second: 13881.7
0.000686155 ms. 0.97151%. aten::flatten (1 nodes)
```

Reviewed By: ajyu

Differential Revision: D25986759

fbshipit-source-id: dc0f542c56a688d331d349845b78084577970476
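The change rests on the fact that flattening a contiguous range of dimensions is just a reshape to a collapsed shape. A minimal pure-Python sketch of the shape computation that a flatten-via-reshape would perform (the helper name `flatten_shape` is hypothetical, for illustration only, not PyTorch API):

```python
from math import prod

def flatten_shape(shape, start_dim=0, end_dim=-1):
    """Compute the target shape for flattening dims [start_dim, end_dim]
    into one dimension; this is the shape flatten would hand to reshape.
    Hypothetical helper, not part of the aten/PyTorch API."""
    n = len(shape)
    # normalize negative dims, as PyTorch does
    start = start_dim % n
    end = end_dim % n
    # collapse dims [start, end] into a single dim of their product
    return list(shape[:start]) + [prod(shape[start:end + 1])] + list(shape[end + 1:])

print(flatten_shape((2, 3, 4)))      # full flatten -> [24]
print(flatten_shape((2, 3, 4), 1))   # flatten from dim 1 -> [2, 12]
```

In PyTorch terms, `torch.flatten(x, 1)` on a tensor of shape `(2, 3, 4)` then behaves like `x.reshape(2, 12)`, which lets flatten reuse reshape's fast paths (e.g. returning a view when strides allow).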
Author
Hao Lu