Fix Cast node naming collisions and opset 10 Resize in float16 conversion (#27469)
## Summary
- Fix Cast node naming collisions in `convert_float_to_float16` when
nodes have empty names (common in PyTorch exports)
- Fix `ALWAYS_FLOAT_INPUTS` for opset 10 Resize, whose scales input (index 1) was unprotected
- Add dedicated test suite for float16 conversion (`test_float16.py`, 8
tests)
## Motivation
Fixes #14827
When `convert_float_to_float16` processes models with unnamed nodes (empty `node.name`, very common in PyTorch/TensorFlow-exported ONNX models), the generated Cast node names collide because they are derived from the empty `node.name`. For example, multiple Resize nodes all produce Cast nodes named `"_input_cast_2"` and output tensors named `"_input_cast_2"`, corrupting the graph with duplicate names.
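A minimal repro sketch of the collision scenario (the import path, tensor shapes, and `mode` attribute are assumptions for illustration, not taken from the issue):

```python
import numpy as np
from onnx import TensorProto, helper, numpy_helper

# Import path assumed for an installed onnxruntime wheel; the new tests
# import float16.py from the source tree instead.
from onnxruntime.transformers.float16 import convert_float_to_float16

# Two unnamed Resize nodes (opset 11 layout: X, roi, scales), each with its
# own float32 scales graph input. Index 2 is protected, so the converter
# must insert a Cast back to float32 in front of each node.
roi = numpy_helper.from_array(np.array([], dtype=np.float32), name="roi")
nodes = [
    helper.make_node("Resize", ["x", "roi", "scales1"], ["y1"], name="", mode="nearest"),
    helper.make_node("Resize", ["y1", "roi", "scales2"], ["y2"], name="", mode="nearest"),
]
graph = helper.make_graph(
    nodes,
    "repro",
    [
        helper.make_tensor_value_info("x", TensorProto.FLOAT, [1, 1, 2, 2]),
        helper.make_tensor_value_info("scales1", TensorProto.FLOAT, [4]),
        helper.make_tensor_value_info("scales2", TensorProto.FLOAT, [4]),
    ],
    [helper.make_tensor_value_info("y2", TensorProto.FLOAT, None)],
    initializer=[roi],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 11)])

fp16_model = convert_float_to_float16(model)
cast_names = [n.name for n in fp16_model.graph.node if n.op_type == "Cast"]
# Before the fix, every unnamed Resize produced a Cast named "_input_cast_2".
assert len(cast_names) == len(set(cast_names)), f"duplicates: {cast_names}"
```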
Additionally, the `ALWAYS_FLOAT_INPUTS` dict only protected Resize
scales at index 2 (opset 11+ layout: `[X, roi, scales, sizes]`), but
opset 10 Resize has scales at index 1 (`[X, scales]`), leaving it
unprotected.
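For context, a hedged sketch of the dict's shape (other entries elided; the real table in `float16.py` lists more ops):

```python
# Values are input indices that must stay float32 even in an fp16 model.
# Opset 10 Resize takes [X, scales], so scales sits at index 1; opset 11+
# moved it to index 2 ([X, roi, scales, sizes]).
ALWAYS_FLOAT_INPUTS = {
    "Resize": [1, 2],  # was [2], which only covered opset 11+
}
```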
## Changes
**`onnxruntime/python/tools/transformers/float16.py`** (11 lines
changed):
- Use unique tensor names (`input_name`/`output`) as the base for generated Cast node and output names, instead of the potentially empty `node.name` (see the sketch after this list)
- Add index 1 to `ALWAYS_FLOAT_INPUTS["Resize"]` to protect opset 10
scales
- Fix misleading comment ("change current node's input name" → "output
name")
**`onnxruntime/test/python/transformers/test_float16.py`** (new file, 8
tests):
- `test_resize_opset11_cast_naming_unique` — multiple unnamed Resize
nodes produce unique Cast names
- `test_resize_opset11_scales_initializer_stays_fp32` — scales
initializer preserved as float32
- `test_resize_opset10_scales_initializer_stays_fp32` — opset 10 scales protected at index 1 (sketched after this list)
- `test_resize_opset10_multiple_unnamed_unique_names` — opset 10 naming
uniqueness
- `test_blocked_node_cast_naming_unique` — blocked op nodes (Upsample)
also get unique Cast names
- `test_resize_with_op_block_list` — Resize in op_block_list still
produces unique names
- `test_data_input_converted_to_fp16` — data tensor correctly converts
to fp16
- `test_force_fp16_initializers` — force flag overrides protection
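As a flavor of what these tests assert, here is a hedged sketch along the lines of `test_resize_opset10_scales_initializer_stays_fp32` (graph shapes assumed; not the actual test code):

```python
import numpy as np
from onnx import TensorProto, helper, numpy_helper
from onnxruntime.transformers.float16 import convert_float_to_float16

# Opset 10 Resize takes [X, scales], so the scales initializer sits at
# index 1 and must survive conversion as float32.
scales = numpy_helper.from_array(np.array([1, 1, 2, 2], np.float32), name="scales")
resize = helper.make_node("Resize", ["x", "scales"], ["y"], name="", mode="nearest")
graph = helper.make_graph(
    [resize],
    "opset10_resize",
    [helper.make_tensor_value_info("x", TensorProto.FLOAT, [1, 1, 2, 2])],
    [helper.make_tensor_value_info("y", TensorProto.FLOAT, [1, 1, 4, 4])],
    initializer=[scales],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 10)])

fp16_model = convert_float_to_float16(model)
kept = {t.name: t.data_type for t in fp16_model.graph.initializer}
assert kept["scales"] == TensorProto.FLOAT  # not converted to FLOAT16
```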
## Test Plan
- All 8 new tests pass locally (`python -m unittest
test_float16.TestFloat16Conversion -v`)
- The existing `test_gpt2_past_fp16` test passes (no regression in float16 behavior)
- `ruff check` passes on both files