fix lift() when using functionalization with fake tensors (#82008)
If you're running `make_fx(functionalize(...))` and call the `torch.tensor()` constructor, the object returned to Python should be a `FunctionalTensorWrapper(FakeTensor)`.
That requires functionalization's implementation of `lift()` to properly redispatch, get the returned tensor, and wrap it. Fixed here.
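A minimal repro sketch of the scenario described above, assuming a recent PyTorch where `torch.func.functionalize` and `make_fx`'s `tracing_mode="fake"` are available (the helper `f` is hypothetical, for illustration only):

```python
import torch
from torch.fx.experimental.proxy_tensor import make_fx
from torch.func import functionalize

def f(x):
    # Calling the torch.tensor() constructor while tracing hits lift():
    # the freshly constructed plain tensor must be redispatched so it is
    # lifted to a FakeTensor, then wrapped in a FunctionalTensorWrapper
    # before being handed back to Python.
    y = torch.tensor([1.0, 2.0])
    return x + y

# tracing_mode="fake" runs the trace under FakeTensorMode, combining
# functionalization with fake tensors as in the bug report.
gm = make_fx(functionalize(f), tracing_mode="fake")(torch.ones(2))
```

The resulting `gm` is a `torch.fx.GraphModule` whose graph reflects the functionalized (mutation-free) trace of `f`.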
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82008
Approved by: https://github.com/ezyang