[dynamo] Avoid recompilation when the PyTorch function accepts scalars (#108162)
Before, it would wrap the scalar input in a 0D tensor, which incurred a
guard and specialisation on its value.
It's not clear whether guarding and specialising is the right behaviour
when we create 0D tensors, but that's a story for another day.
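A minimal sketch of the behaviour this avoids, assuming the NumPy-under-torch.compile
path this PR touches; the function name and values are illustrative, not from the PR:

```python
import numpy as np
import torch

@torch.compile
def add_scalar(x, alpha):
    # np.add accepts a Python scalar. Before this change, dynamo wrapped
    # `alpha` into a 0D tensor, guarding on (specialising to) its value.
    return np.add(x, alpha)

x = np.ones(3)
add_scalar(x, 1.0)
add_scalar(x, 2.0)  # previously a new scalar value could trigger a recompile
```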
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108162
Approved by: https://github.com/ev-br, https://github.com/peterbell10