[WebNN] Better int64 integration (#23831)
This PR adds some workarounds to enable int64 support for some WebNN
backends which don't support int64 data type.
- Do not fall back ops whose only blocker is the int64 limitation.
- Convert all int64 initializers and input values to int32, handling
potential overflow errors.
- Register all int64 model inputs and outputs as int32 ml-tensors.
- Handle ONNX ops whose inputs or outputs require conversion between
int64 and int32, e.g. ArgMax, ArgMin, Cast, etc.
- Convert int32 output data back to int64 for model outputs declared as int64.
- Disallow int64 outputs as 'ml-tensor' preferredOutputLocation.
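The conversion steps above can be sketched as follows. This is a minimal, hypothetical illustration (not the PR's actual code), assuming the int64 data arrives as a `BigInt64Array` and the backend consumes plain `Int32Array` buffers; the helper names are made up for this example:

```typescript
// Hypothetical sketch: convert an int64 tensor buffer to int32 for a
// WebNN backend lacking int64 support, rejecting values that overflow.
function int64ToInt32(data: BigInt64Array): Int32Array {
  const out = new Int32Array(data.length);
  for (let i = 0; i < data.length; i++) {
    const v = data[i];
    // int32 range check: values outside [-2^31, 2^31 - 1] cannot be
    // represented, so the conversion must fail rather than truncate.
    if (v > 2147483647n || v < -2147483648n) {
      throw new RangeError(`int64 value ${v} overflows int32`);
    }
    out[i] = Number(v);
  }
  return out;
}

// Widen int32 results back to the int64 layout the ONNX model declares
// for its outputs. This direction is always lossless.
function int32ToInt64(data: Int32Array): BigInt64Array {
  const out = new BigInt64Array(data.length);
  for (let i = 0; i < data.length; i++) {
    out[i] = BigInt(data[i]);
  }
  return out;
}
```

The overflow check is the important design point: silently truncating an out-of-range int64 value would produce wrong results, so an explicit error surfaces the limitation to the caller instead.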
Fixes #21401