b99caf81 - Add doctr recognition and detection models (#1315)

Summary:
This PR adds the [doctr](https://github.com/mindee/doctr) recognition and detection models to TorchBench. These are inference-only models, so no train test is added.

Pull Request resolved: https://github.com/pytorch/benchmark/pull/1315

Test Plan:
```
$ python run.py doctr_det_predictor -d cuda
Running eval method from doctr_det_predictor on cuda in eager mode with input batch size 4.
GPU Time:            47.578 milliseconds
CPU Total Wall Time: 47.624 milliseconds

$ python run.py doctr_det_predictor -d cuda --torchdynamo eager
Running eval method from doctr_det_predictor on cuda in eager mode with input batch size 4.
GPU Time:            53.928 milliseconds
CPU Total Wall Time: 53.975 milliseconds
Correctness:         True

$ python run.py doctr_det_predictor -d cuda --torchdynamo inductor
    self.output.compile_subgraph(self)
  File "/data/home/xzhao9/cluster/miniconda3/envs/py310/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 352, in compile_subgraph
    self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root)
  File "/data/home/xzhao9/cluster/miniconda3/envs/py310/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 397, in compile_and_call_fx_graph
    compiled_fn = self.call_user_compiler(gm)
  File "/data/home/xzhao9/cluster/miniconda3/envs/py310/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 437, in call_user_compiler
    raise BackendCompilerFailed(self.compiler_fn, e) from e
torch._dynamo.exc.BackendCompilerFailed: compile_fx raised RuntimeError: Inference tensors do not track version counter.

While executing %self_conv1 : [#users=1] = call_module[target=self_conv1](args = (%x,), kwargs = {})
Original traceback:
Module stack: {'self_conv1': <class 'torch.nn.modules.conv.Conv2d'>}
  File "/data/home/xzhao9/cluster/miniconda3/envs/py310/lib/python3.10/site-packages/torchvision/models/_utils.py", line 69, in <graph break in forward>
    x = module(x)
```
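The inductor failure above is about inference tensors: anything created under `torch.inference_mode()` skips autograd bookkeeping, including the version counter that compilation later tries to read. Below is a minimal sketch of that property, assuming this is the mechanism behind the failure; the snippet is illustrative and not taken from the benchmark or the PR.

```python
import torch

# Tensors created inside torch.inference_mode() are "inference tensors":
# they opt out of autograd bookkeeping, including the version counter.
with torch.inference_mode():
    t = torch.ones(2, 2)

print(t.is_inference())  # True

# Reading the version counter should trip the same error message seen above.
try:
    _ = t._version
except RuntimeError as e:
    print(e)  # Inference tensors do not track version counter.
```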
```
$ python run.py doctr_reco_predictor -d cuda
Running eval method from doctr_reco_predictor on cuda in eager mode with input batch size 64.
GPU Time:            8.284 milliseconds
CPU Total Wall Time: 8.312 milliseconds

$ python run.py doctr_reco_predictor -d cuda --torchdynamo eager
Running eval method from doctr_reco_predictor on cuda in eager mode with input batch size 64.
GPU Time:            8.334 milliseconds
CPU Total Wall Time: 8.373 milliseconds
Correctness:         True

# torchinductor doesn't work on this model
$ python run.py doctr_reco_predictor -d cuda --torchdynamo inductor
  File "/data/home/xzhao9/cluster/miniconda3/envs/py310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1447, in run
    super().run()
  File "/data/home/xzhao9/cluster/miniconda3/envs/py310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 349, in run
    and self.step()
  File "/data/home/xzhao9/cluster/miniconda3/envs/py310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 322, in step
    getattr(self, inst.opname)(inst)
  File "/data/home/xzhao9/cluster/miniconda3/envs/py310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 196, in wrapper
    self.output.compile_subgraph(self, reason=reason)
  File "/data/home/xzhao9/cluster/miniconda3/envs/py310/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 352, in compile_subgraph
    self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root)
  File "/data/home/xzhao9/cluster/miniconda3/envs/py310/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 397, in compile_and_call_fx_graph
    compiled_fn = self.call_user_compiler(gm)
  File "/data/home/xzhao9/cluster/miniconda3/envs/py310/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 437, in call_user_compiler
    raise BackendCompilerFailed(self.compiler_fn, e) from e
torch._dynamo.exc.BackendCompilerFailed: compile_fx raised RuntimeError: Inference tensors do not track version counter.
```

Fixes https://github.com/pytorch/benchmark/issues/1305

Reviewed By: FindHao

Differential Revision: D41593159

Pulled By: xuzhao9

fbshipit-source-id: 5c186fbd40fdf47ab182d5df278af84c0e824ac9
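For context, here is a minimal sketch of constructing and calling the two doctr predictors that these benchmarks wrap. The architecture names, input shapes, and batch sizes below are assumptions chosen to mirror the test-plan output; the actual benchmark code may configure them differently.

```python
import numpy as np
from doctr.models import detection_predictor, recognition_predictor

# Text-detection predictor: consumes full document pages (H x W x 3, uint8).
# Arch name "db_resnet50" is an assumption, not necessarily what the PR uses.
det = detection_predictor(arch="db_resnet50", pretrained=True)

# Text-recognition predictor: consumes pre-cropped word images.
# Arch name "crnn_vgg16_bn" is likewise an assumption.
reco = recognition_predictor(arch="crnn_vgg16_bn", pretrained=True)

# Illustrative random inputs standing in for the benchmark's example data,
# sized to match the batch sizes reported above (4 pages, 64 word crops).
pages = [(np.random.rand(1024, 1024, 3) * 255).astype(np.uint8) for _ in range(4)]
crops = [(np.random.rand(32, 128, 3) * 255).astype(np.uint8) for _ in range(64)]

det_out = det(pages)    # per-page detection results
reco_out = reco(crops)  # per-crop (word, confidence) predictions
```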