[ONNX] Support aminmax
Adds support for exporting `torch.aminmax` to ONNX.
One use case is exporting fake-quantized models: the quantization observer calls `torch.aminmax` at https://github.com/pytorch/pytorch/blob/1601a4dc9f689db3912190dcc8fdc70814896292/torch/ao/quantization/observer.py#L447.
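A minimal sketch (not taken from the PR) of the kind of export this change enables; the module, tensor shape, and `opset_version` below are illustrative assumptions:

```python
import io

import torch


class AminmaxModule(torch.nn.Module):
    def forward(self, x):
        # torch.aminmax computes the min and max in a single call;
        # with dim/keepdim it reduces along the given dimension.
        amin, amax = torch.aminmax(x, dim=1, keepdim=True)
        return amin, amax


# With this PR, exporting a model that calls torch.aminmax no longer
# fails with an unsupported-operator error (opset version is assumed).
f = io.BytesIO()
torch.onnx.export(AminmaxModule(), (torch.randn(3, 4),), f, opset_version=11)
```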
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75714
Approved by: https://github.com/garymm