Fix log1p lowering bug (#64724)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64724
The constant `1` introduces an int tensor instead of a float tensor, which doesn't work well with downstream elementwise operators. The error looks like:
```
[TensorRT] WARNING: IElementWiseLayer with inputs (Unnamed Layer* 1) [Unary]_output and (Unnamed Layer* 2) [Constant]_output: first input has type Float but second input has type Int32.
```
Changing the constant to float type fixes this.
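As an illustrative sketch (not the actual lowering code in this PR), the root cause can be reproduced with plain NumPy: a bare integer literal `1` materializes as an integer-typed constant, while `1.0` yields a float constant that matches the Float input of the downstream elementwise add.

```python
import numpy as np

# A bare integer literal becomes an integer-typed constant,
# analogous to the Int32 constant node TensorRT complained about.
int_const = np.asarray(1)

# A float literal becomes a float-typed constant, matching the
# Float input the downstream elementwise (add) layer expects.
float_const = np.asarray(1.0)

x = np.random.rand(4).astype(np.float32)

# log1p(x) lowered as log(x + const): with a float constant the add
# stays in floating point and agrees with the reference log1p.
result = np.log(x + float_const)
reference = np.log1p(x)
assert np.allclose(result, reference)
```

In the eager NumPy/PyTorch world the mixed int/float add is silently promoted, which is why the bug only surfaces once the graph is handed to TensorRT, whose `IElementWiseLayer` requires both inputs to share a type.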
Reviewed By: 842974287
Differential Revision: D30796959
fbshipit-source-id: 0538e4dd960df9ce87a2d4cafe8f1a0c061b6bad