[TensorRT EP] Fallback to CUDA EP if it's explicitly assigned (#17535)
### Description
* The TensorRT EP falls back to the CUDA EP only when the CUDA EP is explicitly assigned in `providers`
* Likewise, the MIGraphX EP falls back to the ROCM EP only when the ROCM EP is explicitly assigned
Test cases:
| When user specifies providers= | self._fallback_providers= |
| --- | --- |
| ["TensorrtExecutionProvider", "CUDAExecutionProvider"] | ["CUDAExecutionProvider", "CPUExecutionProvider"] |
| ["TensorrtExecutionProvider", ("CUDAExecutionProvider", cuda_options)] | ["CUDAExecutionProvider", "CPUExecutionProvider"] |
| ["TensorrtExecutionProvider"] | ["CPUExecutionProvider"] |
| [("TensorrtExecutionProvider", trt_options)] | ["CPUExecutionProvider"] |
| [("TensorrtExecutionProvider", trt_options), ("CUDAExecutionProvider", cuda_options)] | ["CUDAExecutionProvider", "CPUExecutionProvider"] |
| ["TensorrtExecutionProvider", "CPUExecutionProvider"] | ["CPUExecutionProvider"] |
### Motivation and Context
Addresses the review comments in https://github.com/microsoft/onnxruntime/issues/17394 and applies the same fallback logic to the [MIGraphX, ROCM] pair.