[QNN Quant] Add preprocessing option to transpose graph inputs/outputs to channel-last (#19731)
### Description
Adds the optional parameters `inputs_to_make_channel_last` and
`outputs_to_make_channel_last` to the `qnn_preprocess_model()` function.
```python
"""
inputs_to_make_channel_last: List of graph input names to transpose to be "channel-last". For example,
    if "input0" originally has the shape (N, C, D1, D2, ..., Dn), the resulting model will change input0's
    shape to (N, D1, D2, ..., Dn, C) and add a transpose node after it.

    Original:
        input0 (N, C, D1, D2, ..., Dn) --> <Nodes>
    Updated:
        input0 (N, D1, D2, ..., Dn, C) --> Transpose --> input0_chanfirst (N, C, D1, D2, ..., Dn) --> <Nodes>

    This can potentially improve inference latency for QDQ models running on QNN EP because the
    additional transpose node may allow other transpose nodes inserted during ORT layout transformation
    to cancel out.

outputs_to_make_channel_last: List of graph output names to transpose to be "channel-last". For example,
    if "output0" originally has the shape (N, C, D1, D2, ..., Dn), the resulting model will change output0's
    shape to (N, D1, D2, ..., Dn, C) and add a transpose node before it.

    Original:
        <Nodes> --> output0 (N, C, D1, D2, ..., Dn)
    Updated:
        <Nodes> --> output0_chanfirst (N, C, D1, D2, ..., Dn) --> Transpose --> output0 (N, D1, D2, ..., Dn, C)

    This can potentially improve inference latency for QDQ models running on QNN EP because the
    additional transpose node may allow other transpose nodes inserted during ORT layout transformation
    to cancel out.
"""
```
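A minimal usage sketch (the model paths and the `input0`/`output0` tensor names are placeholders, and the import path is assumed to match where `qnn_preprocess_model()` lives in the quantization tooling):
```python
from onnxruntime.quantization.execution_providers.qnn import qnn_preprocess_model

# Placeholder model paths and tensor names for illustration.
modified = qnn_preprocess_model(
    "model.onnx",
    "model.preproc.onnx",
    inputs_to_make_channel_last=["input0"],
    outputs_to_make_channel_last=["output0"],
)
print("Model was modified:", modified)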
**NOTE: If you use these options with the quantization scripts, you must ensure that your data_reader feeds in transposed (channel-last) input data. This does not happen automatically** (see the data-reader sketch below).
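As a hedged illustration of that requirement, the sketch below assumes a single float32 image input named "input0" whose calibration data arrives as NCHW batches; the class and variable names are made up for this example:
```python
import numpy as np
from onnxruntime.quantization import CalibrationDataReader

class ChannelLastDataReader(CalibrationDataReader):
    """Feeds NHWC data for a model whose "input0" was made channel-last."""

    def __init__(self, nchw_batches):
        # nchw_batches: iterable of float32 arrays shaped (N, C, H, W).
        self._iter = iter(nchw_batches)

    def get_next(self):
        batch = next(self._iter, None)
        if batch is None:
            return None  # Signals that the calibration data is exhausted.
        # Transpose (N, C, H, W) -> (N, H, W, C) to match the updated input.
        return {"input0": np.transpose(batch, (0, 2, 3, 1))}
```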
### Motivation and Context
Native QNN operators use the channel-last data layout, but ONNX uses
channel-first. To bridge the gap, ORT's layout transformer inserts
transposes around layout-sensitive nodes and updates their domain to
indicate that they now operate on channel-last data. The transpose
optimizer removes most of these inserted transposes, but not all of them
can always be eliminated (e.g., some may remain at the graph's inputs and
outputs).
We've found that these extra transpose nodes can significantly degrade
inference latency on QNN EP. One workaround (provided by this PR) is to
add _additional_ transpose nodes at the graph inputs or outputs. These
additional nodes can often help the ORT transpose optimizer cancel out
any remaining transpose nodes, which significantly improves latency.
Additionally, it may make more sense for some kinds of inputs (e.g.,
images) to just be in channel-last form, avoiding the need to
pre-transpose the input data before inference.
Example at the input:
```
Original:
input0 (N, C, D1, D2, ..., Dn) --> <Nodes>
Updated:
input0 (N, D1, D2, ..., Dn, C) --> Transpose --> input0_chanfirst (N, C, D1, D2, ..., Dn) --> <Nodes>
```
Example at the output:
```
Original:
<Nodes> --> output0 (N, C, D1, D2, ..., Dn)
Updated:
<Nodes> --> output0_chanfirst (N, C, D1, D2, ..., Dn) --> Transpose --> output0 (N, D1, D2, ..., Dn, C)
```