5e993e6c - [fx2trt] Make TRTInterpreter not require a concrete tensor as an arg (#59948)

[fx2trt] Make TRTInterpreter not require a concrete tensor as an arg (#59948)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59948

1. We have two interpreters, one for vanilla ops and one for acc ops. Some of their logic is similar, so this diff extracts the shared logic into a base interpreter. Any future general feature change can then benefit both interpreters.
2. Make TRTInterpreter not depend on a concrete tensor arg. We will use `InputTensorSpec` to create the necessary inputs for the acc tracer.
3. Add unittests for acc op converters.

Test Plan:
```
buck test mode/opt caffe2/torch/fb/fx2trt:test_linear
buck test mode/opt caffe2/torch/fb/fx2trt:test_batchnorm
buck test mode/opt caffe2/torch/fb/fx2trt:test_convolution
buck test mode/opt caffe2/torch/fb/fx2trt:test_reshape
buck test mode/opt caffe2/torch/fb/fx2trt:test_relu
buck test mode/opt caffe2/torch/fb/fx2trt:test_add
buck test mode/opt caffe2/torch/fb/fx2trt:test_maxpool
```

Reviewed By: jackm321

Differential Revision: D28749682

fbshipit-source-id: 830d845aede7203f6e56eb1c4e6776af197a0fc3
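
To illustrate point 2, here is a minimal sketch of what a spec-based input could look like: a description of a tensor (shape, dtype, device) from which sample inputs are generated on demand, so the interpreter never needs a real tensor handed in. The field names and helpers (`from_tensor`, `generate_input`) below are assumptions for illustration; the actual `InputTensorSpec` API in fx2trt may differ.

```python
# Hypothetical sketch (not the actual fx2trt API): describe an input tensor
# by its metadata only, and materialize throwaway tensors when needed.
from dataclasses import dataclass
from typing import Sequence

import torch


@dataclass
class InputTensorSpec:
    shape: Sequence[int]
    dtype: torch.dtype = torch.float32
    device: torch.device = torch.device("cpu")  # real fx2trt targets CUDA

    @classmethod
    def from_tensor(cls, tensor: torch.Tensor) -> "InputTensorSpec":
        # Capture only the metadata needed to recreate an equivalent input.
        return cls(tuple(tensor.shape), tensor.dtype, tensor.device)

    def generate_input(self) -> torch.Tensor:
        # Create a sample tensor matching the spec, e.g. for tracing.
        return torch.randn(*self.shape, dtype=self.dtype, device=self.device)


# Usage: an interpreter can accept specs instead of concrete tensors and
# construct whatever sample inputs it needs internally.
specs = [InputTensorSpec(shape=(8, 3, 224, 224))]
sample_inputs = [spec.generate_input() for spec in specs]
```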