[Migraphx EP] Static int8 QDQ support (#17931)
### Description
<!-- Describe your changes. -->
Adding static int8 quantization support for MIGraphX Execution Provider
- Allows parsing of calibration tables generated by ONNX Runtime's or
TensorRT's quantization toolsets
- Adds the corresponding environment variables to the MIGraphX EP
- Updates the Python API so execution provider flags can be set (this was
previously missing on the Python side)
- Hooks into MIGraphX's int8 quantization and optimization of models
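A minimal sketch of how the new flags might be used from the updated Python API. The option names below (`migraphx_int8_enable`, `migraphx_int8_calibration_table_name`, `migraphx_use_native_calibration_table`) mirror the TensorRT EP's naming convention and are assumptions; check the MIGraphX EP documentation for your ONNX Runtime build.

```python
# Hypothetical provider options for int8 static QDQ quantization with the
# MIGraphX EP; names are assumed to mirror the TensorRT EP's flags.
provider_options = {
    "migraphx_int8_enable": True,
    # Calibration table produced by ONNX Runtime's or TensorRT's tooling.
    "migraphx_int8_calibration_table_name": "calibration.flatbuffers",
    # False -> table is in ONNX Runtime's format rather than TensorRT's.
    "migraphx_use_native_calibration_table": False,
}

# Session creation (requires an ONNX Runtime build with the MIGraphX EP):
# import onnxruntime as ort
# sess = ort.InferenceSession(
#     "model.onnx",
#     providers=[("MIGraphXExecutionProvider", provider_options)],
# )
print(provider_options["migraphx_int8_enable"])
```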
### Motivation and Context
<!-- - Why is this change required? What problem does it solve?
- If it fixes an open issue, please link to the issue here. -->
Required so that ONNX Runtime can run models through MIGraphX while
leveraging the existing tooling for int8 static QDQ quantization.
This is the first step in a series of PRs that will add further static
quantization at the operator level as MIGraphX releases further support.
These changes drew heavily from the TensorRT EP and should allow for
similar functionality for GPU-based (versus CPU-based) quantization of
models before inference is performed.
---------
Co-authored-by: Ted Themistokleous <tthemist@amd.com>
Co-authored-by: Ted Themistokleous <tedthemistokleous@amd.com>