pytorch
f3b15727 - fix op benchmark OOM issue (#29794)

fix op benchmark OOM issue (#29794)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29794

Before this diff, all tests for an operator were created at once before benchmarking began. After an operator was benchmarked, the same process moved on to the next operator, and so on. The problem is that a single operator can have more than 100 tests, which can cause OOM. This diff avoids creating all of an operator's tests at once by using generators, which create and run each test one by one.

Test Plan:
```
buck run //caffe2/benchmarks/operator_benchmark:benchmark_all_quantized_test -- --iterations 1

# ----------------------------------------
# PyTorch/Caffe2 Operator Micro-benchmarks
# ----------------------------------------
# Tag : short

# Benchmarking PyTorch: relu
# Mode: Eager
# Name: relu_dims(3,4,5)_contigFalse_inplaceFalse_dtypetorch.quint8
# Input: dims: (3, 4, 5), contig: False, inplace: False, dtype: torch.quint8
Forward Execution Time (us) : 52.493

# Benchmarking PyTorch: relu
# Mode: Eager
# Name: relu_dims(3,4,5)_contigFalse_inplaceFalse_dtypetorch.qint8
# Input: dims: (3, 4, 5), contig: False, inplace: False, dtype: torch.qint8
Forward Execution Time (us) : 44.945
...
```

Reviewed By: hl475

Differential Revision: D18500103

fbshipit-source-id: 747c0ad0d302177da04da36e112c67f154115b6e
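The generator-based pattern the commit describes can be sketched as follows. This is a minimal illustration of the idea, not the actual `operator_benchmark` code; the config fields and the eager/lazy comparison are hypothetical stand-ins for the framework's test cases:

```python
import itertools

def config_gen(dims_list, contig_opts, dtype_opts):
    # Yield one benchmark config at a time; the full cross product
    # is never materialized in memory.
    for dims, contig, dtype in itertools.product(dims_list, contig_opts, dtype_opts):
        yield {"dims": dims, "contig": contig, "dtype": dtype}

# Eager: all configs live in memory at once (the pre-fix behavior,
# which can OOM when an operator has >100 configs).
eager = list(config_gen([(3, 4, 5), (6, 7)], [False, True], ["quint8", "qint8"]))

# Lazy: pull one config, run it, discard it (the post-fix behavior).
ran = 0
for cfg in config_gen([(3, 4, 5), (6, 7)], [False, True], ["quint8", "qint8"]):
    ran += 1  # stand-in for building tensors and timing the op

print(len(eager), ran)  # → 8 8: same coverage, but the lazy path holds one config at a time
```

The design point is that a generator keeps peak memory proportional to a single test rather than to the size of the configuration cross product, while iterating the identical set of tests.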