[QNN EP] Fix Batch Normalization Op Builder (#17981)
### Description
There is a gap between ONNX's definition of BatchNormalization and QNN's.
According to the formulas:
ONNX: `Y = (X - input_mean) / sqrt(input_var + epsilon) * scale + B`
QNN: `Y = X * weight + bias`
Rearranging the ONNX formula into the QNN form gives:
`weight = scale / sqrt(input_var + epsilon)`
`bias = B - (input_mean * scale / sqrt(input_var + epsilon))`
QNN EP must therefore compute `weight` and `bias`, along with their quantization
parameters, before handing the op to QNN (see the sketch below).
For this to be possible, `scale`, `B`, `input_mean`, and `input_var` must be static
(initializers).
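The per-channel math follows directly from the rearranged formula above. Below is a minimal sketch of that computation, assuming the dequantized parameters are available as per-channel float vectors; the function and parameter names are illustrative, not the actual op builder code.

```cpp
#include <cmath>
#include <vector>

// Illustrative only: convert ONNX BatchNormalization parameters into the
// per-channel weight/bias form expected by QNN.
void ComputeWeightAndBias(const std::vector<float>& scale,
                          const std::vector<float>& B,
                          const std::vector<float>& input_mean,
                          const std::vector<float>& input_var,
                          float epsilon,
                          std::vector<float>& weight,
                          std::vector<float>& bias) {
  const size_t num_channels = scale.size();
  weight.resize(num_channels);
  bias.resize(num_channels);
  for (size_t c = 0; c < num_channels; ++c) {
    // weight = scale / sqrt(var + epsilon)
    weight[c] = scale[c] / std::sqrt(input_var[c] + epsilon);
    // bias = B - mean * scale / sqrt(var + epsilon) = B - mean * weight
    bias[c] = B[c] - input_mean[c] * weight[c];
  }
}
```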
Implementation:
1. Dequantize `scale`, `B`, `input_mean`, and `input_var` to floating point.
2. Calculate `weight` and `bias`, and their quantization parameters (see the sketch after this list).
3. Quantize `weight` and `bias`, and add them into `TensorWrapper`.
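Step 2 also has to derive quantization parameters for the newly computed `weight` and `bias` tensors from their float value ranges. The sketch below shows one common way to do this, assuming asymmetric uint8 quantization; the helper name and data type are assumptions for illustration and do not mirror the QNN EP's own quantization utilities.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Illustrative only: derive asymmetric uint8 quantization parameters from a
// float tensor's value range, then quantize the tensor.
void QuantizeToUint8(const std::vector<float>& data,
                     std::vector<uint8_t>& quantized,
                     float& scale, int32_t& zero_point) {
  if (data.empty()) return;
  const auto [min_it, max_it] = std::minmax_element(data.begin(), data.end());
  // Extend the range to cover 0 so that zero is exactly representable.
  const float rmin = std::min(*min_it, 0.0f);
  const float rmax = std::max(*max_it, 0.0f);
  scale = (rmax > rmin) ? (rmax - rmin) / 255.0f : 1.0f;
  zero_point = static_cast<int32_t>(std::round(-rmin / scale));
  quantized.resize(data.size());
  for (size_t i = 0; i < data.size(); ++i) {
    const float q = std::round(data[i] / scale) + static_cast<float>(zero_point);
    quantized[i] = static_cast<uint8_t>(std::clamp(q, 0.0f, 255.0f));
  }
}
```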
### Motivation and Context
Fixes the `QnnHTPBackendTests.BatchNorm1D` and `QnnHTPBackendTests.BatchNorm2D` test failures.