fix inference_mode with torch.compile (#101219)
It looks like inference_mode wasn't playing well with functionalization.
If you run torch.compile on a function whose inputs are tensors created outside of inference mode, then when we create functional tensor wrappers for those inputs during compilation, the wrappers need to properly mirror whether or not the original tensor is an inference tensor.
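
A minimal sketch of the scenario, assuming a recent torch.compile API (the actual repro is in the linked issue):

```python
import torch

# Tensor created outside of inference mode: a regular (non-inference) tensor.
x = torch.ones(4)

@torch.compile
def f(t):
    return t + 1

# Calling the compiled function under inference_mode: during compilation,
# functionalization wraps `x` in a functional tensor wrapper, and that wrapper
# should mirror the fact that `x` is not an inference tensor.
with torch.inference_mode():
    out = f(x)

print(out)
```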
Hopefully fixes https://github.com/pytorch/pytorch/issues/101151
Pull Request resolved: https://github.com/pytorch/pytorch/pull/101219
Approved by: https://github.com/albanD, https://github.com/ezyang