[WebGPU] Correct MatMul bias input indexing in `MatMul::ComputeInternal` (#28475)
### Description
WebGPU MatMul was wiring bias into the wrong slot of the `inputs` vector
when a bias is present. The vector is pre-sized to 3 and downstream code
reads bias from `inputs[2]`, but the previous logic appended bias via
`push_back`, leaving `inputs[2]` as a default-initialized `nullptr` and
placing the bias at index 3, where the consumer never looks.
- **Root cause**
  - `inputs` was initialized with size `3` for bias cases, then bias was
    added via `push_back`, creating a fourth element and leaving the
    reserved third slot unset.
- **Change**
  - Replaced the append logic with a direct assignment to the expected
    index.
- **Effect**
  - Bias is now consistently passed at `inputs[2]`, matching
    `ComputeMatMul()` and the Intel-path expectations.
```cpp
std::vector<const Tensor*> inputs(has_bias ? 3 : 2);
inputs[0] = a;
inputs[1] = b;
if (has_bias) {
  const auto* bias = context.Input(2);
  inputs[2] = bias;  // was: inputs.push_back(bias);
}
```
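For readers unfamiliar with the mechanics, here is a minimal standalone
sketch (plain `int`s standing in for tensors; hypothetical, not the actual
ORT code) demonstrating why pre-sizing plus `push_back` leaves the
consumer's slot empty:
```cpp
#include <cassert>
#include <vector>

int main() {
  int a = 0, b = 1, bias = 2;

  // Old logic: pre-size to 3, then append. Slot 2 stays nullptr and the
  // bias lands at index 3, which the consumer never reads.
  std::vector<const int*> buggy(3);
  buggy[0] = &a;
  buggy[1] = &b;
  buggy.push_back(&bias);
  assert(buggy.size() == 4);
  assert(buggy[2] == nullptr);  // consumer reading index 2 sees no bias

  // Fixed logic: assign directly into the reserved slot.
  std::vector<const int*> fixed(3);
  fixed[0] = &a;
  fixed[1] = &b;
  fixed[2] = &bias;
  assert(fixed.size() == 3);
  assert(fixed[2] == &bias);  // consumer reads the bias as expected
  return 0;
}
```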
### Motivation and Context
This addresses incorrect bias propagation in WebGPU MatMul
(`onnxruntime/core/providers/webgpu/math/matmul.cc`) caused by an index
mismatch between the producer and consumer logic. The fix is intentionally
surgical: a one-line behavioral change in the affected code path.
---------
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: guschmue <22941064+guschmue@users.noreply.github.com>