d5801978 - Migrate fx2trt and torch_trt to use the backend option. (#1388)

Summary: Use `create_backend` for the fx2trt and torch_trt backends.

Pull Request resolved: https://github.com/pytorch/benchmark/pull/1388

Test Plan:

```
$ python run.py resnet18 -d cuda
Running eval method from resnet18 on cuda in eager mode with input batch size 256.
GPU Time: 16.392 milliseconds
CPU Total Wall Time: 16.421 milliseconds
```

```
$ python run.py resnet18 -d cuda --backend fx2trt
Running eval method from resnet18 on cuda in eager mode with input batch size 256.
GPU Time: 5.455 milliseconds
CPU Total Wall Time: 5.482 milliseconds
Correctness: True
```

```
$ python run.py resnet18 -d cuda --backend torch_trt
Running eval method from resnet18 on cuda in eager mode with input batch size 256.
GPU Time: 6.172 milliseconds
CPU Total Wall Time: 6.202 milliseconds
Correctness: True
```

Reviewed By: erichan1

Differential Revision: D42925991

Pulled By: xuzhao9

fbshipit-source-id: 1870c66549965807c73b98da85dda4e394b427b6
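For context, a registration helper like `create_backend` is commonly implemented as a decorator that adds each backend function to a registry keyed by name, which a CLI flag such as `--backend` can then look up. The sketch below is a minimal, self-contained illustration of that pattern; the names, signatures, and `Model` class are assumptions for illustration only, not the actual torchbenchmark API.

```python
# Hypothetical sketch of a decorator-based backend registry, illustrating the
# `create_backend` pattern this commit migrates fx2trt/torch_trt onto.
# All names and signatures here are assumptions, not the real torchbenchmark code.

BACKENDS = {}

def create_backend(fn):
    """Register a backend-enabling function under its own name."""
    BACKENDS[fn.__name__] = fn
    return fn

@create_backend
def fx2trt(model, **kwargs):
    # A real backend would lower the model to TensorRT here; this stub
    # just tags the model to show the registry wiring.
    model.backend = "fx2trt"
    return model

@create_backend
def torch_trt(model, **kwargs):
    model.backend = "torch_trt"
    return model

class Model:
    backend = "eager"  # stand-in for a benchmark model

# Dispatch on the value of a --backend flag, as a runner script might:
m = BACKENDS["fx2trt"](Model())
print(m.backend)  # fx2trt
```

With this shape, adding a new backend is just another decorated function, and the runner never needs a hard-coded if/else per backend.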