pytorch
c88da701 - [hpc][inference] enable cuda graph in engine holder (#66738)

[hpc][inference] enable cuda graph in engine holder (#66738)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66738

Added a `max_batch_size` field to `TRTModule`, which is later used to determine how far the engine holder needs to pad the input batch.

Reviewed By: 842974287

Differential Revision: D31286509

fbshipit-source-id: be5c6d4ad9c87ca0842679dc507b187275d4e8dc
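A minimal sketch of the padding idea the commit message describes: CUDA graph replay requires fixed input shapes, so inputs smaller than `max_batch_size` are padded up to that size before being fed to the captured engine. The helper name `pad_to_max_batch` and the engine call are hypothetical illustrations, not the actual `TRTModule` API.

```python
import torch

def pad_to_max_batch(x: torch.Tensor, max_batch_size: int) -> torch.Tensor:
    """Pad the batch dimension of `x` with zeros up to `max_batch_size`.

    A CUDA graph replays a fixed set of kernel launches, so the engine
    must always see inputs of the same (maximum) batch size.
    (Hypothetical helper; not part of TRTModule.)
    """
    batch = x.shape[0]
    if batch > max_batch_size:
        raise ValueError(f"batch {batch} exceeds max_batch_size {max_batch_size}")
    if batch == max_batch_size:
        return x
    pad = x.new_zeros((max_batch_size - batch, *x.shape[1:]))
    return torch.cat([x, pad], dim=0)

# Example: pad a batch of 3 up to max_batch_size=8 before handing it to a
# graph-captured engine, then slice the outputs back to the real batch size.
inp = torch.randn(3, 16, device="cuda" if torch.cuda.is_available() else "cpu")
padded = pad_to_max_batch(inp, max_batch_size=8)
# outputs = engine(padded)[: inp.shape[0]]   # hypothetical engine call
```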
Author: Bangsheng Tang