Call lift_fresh after scalar_to_tensor in composite derivative formulas (#81609)
`scalar_to_tensor` is not dispatched, so there is no interposition point at which modes can ensure the resulting tensor is appropriately wrapped. Calling `lift_fresh` on the result introduces that interposition point, which prevents FakeTensorMode from erroring. I can't make these wrapped numbers instead, because downstream logic in convolution backwards expects these inputs to be honest-to-goodness tensors for conjugation.
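Schematically, the change looks like the following (a C++-style sketch based on the description above, not a compilable excerpt; `scalar_to_tensor` and `at::lift_fresh` are real ATen functions, but the surrounding formula context is illustrative):

```
// Before: scalar_to_tensor is a plain C++ helper, not a dispatched op,
// so a mode like FakeTensorMode never sees the freshly created tensor.
auto t = scalar_to_tensor(scalar);

// After: wrapping the result in lift_fresh adds a dispatched call,
// giving modes a point to intercept and wrap the tensor appropriately.
auto t = at::lift_fresh(scalar_to_tensor(scalar));
```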
This fixes `test_aot_autograd_exhaustive_special_ndtr_cpu_float32` in https://github.com/pytorch/functorch/pull/935.

See https://github.com/pytorch/pytorch/issues/81608 for more discussion.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81609
Approved by: https://github.com/soulitzer