Add TensorRT-Model-Optimizer INT4 AWQ support in onnxruntime tools (#22390)
[TensorRT-Model-Optimizer](https://github.com/NVIDIA/TensorRT-Model-Optimizer)
has an implementation of INT4 AWQ (activation-aware weight quantization). This change adds
support to the onnxruntime quantization tools so that models can be quantized with
TensorRT-Model-Optimizer.
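
A minimal usage sketch of how the new path might be driven from the existing
`matmul_4bits_quantizer.py` tooling. The config class name (`NVAWQWeightOnlyQuantConfig`)
and its parameter names are illustrative assumptions modeled on the existing
`*WeightOnlyQuantConfig` classes; the `nvidia-modelopt` package is assumed to be installed.

```python
import numpy as np
import onnx
from onnxruntime.quantization import CalibrationDataReader
from onnxruntime.quantization.matmul_4bits_quantizer import (
    MatMul4BitsQuantizer,
    NVAWQWeightOnlyQuantConfig,  # assumed name for the new TensorRT-Model-Optimizer AWQ config
)


class RandomDataReader(CalibrationDataReader):
    """Feeds a few random batches for calibration; replace with real data.

    The input name "input_ids" and shapes are placeholders for a model's
    actual graph inputs.
    """

    def __init__(self, num_batches=4):
        self._data = iter(
            [
                {"input_ids": np.random.randint(0, 32000, (1, 128), dtype=np.int64)}
                for _ in range(num_batches)
            ]
        )

    def get_next(self):
        return next(self._data, None)


model = onnx.load("model.onnx")

# AWQ is calibration-based, so the config is assumed to accept a data reader.
algo_config = NVAWQWeightOnlyQuantConfig(
    calibration_data_reader=RandomDataReader(),  # hypothetical parameter name
)

quantizer = MatMul4BitsQuantizer(model, algo_config=algo_config)
quantizer.process()
quantizer.model.save_model_to_file("model_int4_awq.onnx", use_external_data_format=True)
```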