onnxruntime
a4513188 - Refactor TRT EP error message with details (#17007)

Refactor TRT EP error message with details (#17007)

If users set `trt_profile_min_shapes`, `trt_profile_max_shapes`, and `trt_profile_opt_shapes`, they need to provide shape profiles for all dynamic-shape inputs. In cases where the main graph is partitioned into TRT/CUDA subgraphs, if an input of a TRT subgraph also has a dynamic shape, users need to provide its shape profiles as well. Users might not notice this, so the TRT EP now tells them exactly which inputs are missing shape profiles.

The new warning message is:

```
Traceback (most recent call last):
  File "/home/azureuser/disk2/debug/optional_inputs.py", line 218, in <module>
    test_optional_input_dynamic(trt_profile=True, optional=True)
  File "/home/azureuser/disk2/debug/optional_inputs.py", line 195, in test_optional_input_dynamic
    session = ort.InferenceSession(
  File "/home/azureuser/anaconda3/lib/python3.9/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 419, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/home/azureuser/anaconda3/lib/python3.9/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 471, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
onnxruntime.capi.onnxruntime_pybind11_state.EPFail: [ONNXRuntimeError] : 11 : EP_FAIL : User needs to provide all the dynamic shape inputs with associated profiles if they want to explicitly set profiles through provider options. Please note that main graph could be partitioned into TRT/CUDA/CPU subgraphs, in this case, user also needs to provide shape profiles for the TRT subgraph's input if it's dynamic shape input. Following input(s) has no associated shape profiles provided: x1
```

Please see this GitHub issue: https://github.com/microsoft/onnxruntime/issues/16600
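The check this commit improves can be sketched in plain Python (the real validation lives in the TRT EP's C++ code; the helper names below are hypothetical, and the profile dict stands in for the parsed `trt_profile_min/max/opt_shapes` provider options):

```python
def find_missing_profiles(dynamic_inputs, profile_shapes):
    """Return the dynamic-shape input names that have no explicit shape profile.

    dynamic_inputs: names of (sub)graph inputs with symbolic/dynamic dimensions.
    profile_shapes: mapping of input name -> shape string (e.g. "1x3x224x224"),
                    as parsed from the trt_profile_* provider options.
    """
    return [name for name in dynamic_inputs if name not in profile_shapes]


def check_profiles(dynamic_inputs, profile_shapes):
    """Raise an error listing every dynamic input without a profile,
    mirroring the detailed message the TRT EP now emits."""
    missing = find_missing_profiles(dynamic_inputs, profile_shapes)
    if missing:
        raise RuntimeError(
            "User needs to provide all the dynamic shape inputs with associated "
            "profiles if they want to explicitly set profiles through provider "
            "options. Following input(s) has no associated shape profiles "
            "provided: " + ",".join(missing)
        )
```

With this, a model whose dynamic input `x1` lacks a profile fails with a message naming `x1` specifically, rather than a generic profile error.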