[inductor] Improve handling of index_expr with floating point dtypes (#105021)
I found that the upsample bicubic lowering was generating this line:
```python
ops.index_expr(0.244094488188976*x0, torch.float32)
```
This is problematic because Triton's `ops.index_expr` expects an integer-valued index expression and an integer dtype.
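One way to see the mismatch, and a possible repair, is to peel the float coefficient off the sympy expression so that only an integer-valued expression reaches `index_expr`, with the float scale applied afterwards as an ordinary floating-point op. This is a minimal sketch with illustrative names, assuming a sympy expression like the one above; it is not the actual fix from this PR:

```python
import sympy

def split_float_coeff(expr):
    """Illustrative helper: separate a float scale from an otherwise
    integer-valued index expression, e.g. 0.244*x0 -> (0.244, x0).
    The integer part could then go to ops.index_expr with an integer
    dtype, and the float scale applied as a regular op afterwards."""
    coeff, rest = expr.as_coeff_Mul()
    if coeff.is_Float:
        return float(coeff), rest
    return 1.0, expr

x0 = sympy.Symbol("x0", integer=True)
scale, index = split_float_coeff(sympy.Float("0.244094488188976") * x0)
# `index` is now the purely integer expression x0; `scale` holds the float.
```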
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105021
Approved by: https://github.com/lezcano