[inductor] use aten.kernel.OVERLOAD_NAME instead of aten.kernel in python wrapper (#103576)
Summary:
When we call an overload packet (e.g. torch.ops.aten.ge), C++ code (from TorchScript) determines which overload to dispatch to, and there is sometimes ambiguity about which overload should be chosen. Therefore, in the generated Python wrapper we should call the specific overload by name whenever we know it.
Specifically, the issue was with `ge`. We had a test (test_lerp_cuda from test_torchinductor.py) that eventually got lowered to code like this:
```
torch.ops.aten.ge(torch.tensor(70000.), 0.5)
```
This can match either torch.ops.aten.ge.Scalar (the intended overload), which returns torch.tensor(True), or torch.ops.aten.ge.float (a TorchScript overload), which returns the Python bool `True`. Which overload is chosen depends on the order in which the operators are registered, and internally that order can differ between build configs (opt vs. dev-nosan). In opt mode the TorchScript overload appeared first, so it got called, and the inductor-generated program failed.
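The registration-order dependence can be illustrated with a small self-contained sketch (toy names and classes, not the actual dispatcher): two overload schemas both match a 0-d-tensor argument because such a tensor is implicitly convertible to float, so whichever schema was registered first wins, while calling a specific overload directly is unambiguous.

```python
class Tensor:
    """Toy 0-d tensor that, like a real 0-d torch tensor, converts to float."""
    def __init__(self, value):
        self.value = value
    def __float__(self):
        return float(self.value)
    def __repr__(self):
        return f"tensor({self.value})"

def ge_scalar(a, b):
    # Stand-in for aten.ge.Scalar: Tensor x Scalar -> Tensor
    return Tensor(a.value >= b)

def ge_float(a, b):
    # Stand-in for aten.ge.float: float x float -> bool
    return float(a) >= float(b)

def make_packet(registration_order):
    """Toy overload packet: try schemas in registration order, first match wins
    (mimicking the resolver's order-dependent behavior described above)."""
    impls = {"Scalar": ge_scalar, "float": ge_float}

    def matches(name, a, b):
        if name == "Scalar":
            return isinstance(a, Tensor)
        # "float" schema: both args convertible to float. A 0-d tensor
        # converts too, so this schema ALSO matches -- the ambiguity.
        try:
            float(a), float(b)
            return True
        except TypeError:
            return False

    def packet(a, b):
        for name in registration_order:
            if matches(name, a, b):
                return impls[name](a, b)
        raise TypeError("no matching overload")
    return packet

dev_ge = make_packet(["Scalar", "float"])  # dev-nosan-like order
opt_ge = make_packet(["float", "Scalar"])  # opt-like order

print(dev_ge(Tensor(70000.0), 0.5))   # Tensor result (intended overload)
print(opt_ge(Tensor(70000.0), 0.5))   # plain bool (wrong overload)
print(ge_scalar(Tensor(70000.0), 0.5))  # direct call: always unambiguous
```

Calling the named overload (the fix in this PR) sidesteps the packet's order-dependent resolution entirely, which is why the generated wrapper now emits `aten.kernel.OVERLOAD_NAME`.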
Differential Revision: D46712744
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103576
Approved by: https://github.com/jgong5, https://github.com/desertfire