inductor: fix custom op _convolution_pointwise_.binary functionalization error at AOTAutograd (#94581)
This is another attempt (the first was https://github.com/pytorch/pytorch/pull/94172) to fix the warning emitted when running the inductor CPU path:
```
l. Known situations this can occur are inference mode only compilation involving resize_ or prims (!schema.hasAnyAliasInfo() INTERNAL ASSERT FAILED); if your situation looks different please file a bug to PyTorch.
Traceback (most recent call last):
File "/home/xiaobing/pytorch-offical/torch/_functorch/aot_autograd.py", line 1377, in aot_wrapper_dedupe
fw_metadata, _out = run_functionalized_fw_and_collect_metadata(flat_fn)(
File "/home/xiaobing/pytorch-offical/torch/_functorch/aot_autograd.py", line 578, in inner
flat_f_outs = f(*flat_f_args)
File "/home/xiaobing/pytorch-offical/torch/_functorch/aot_autograd.py", line 2455, in functional_call
out = Interpreter(mod).run(*args[params_len:], **kwargs)
File "/home/xiaobing/pytorch-offical/torch/fx/interpreter.py", line 136, in run
self.env[node] = self.run_node(node)
File "/home/xiaobing/pytorch-offical/torch/fx/interpreter.py", line 177, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
File "/home/xiaobing/pytorch-offical/torch/fx/interpreter.py", line 294, in call_module
return submod(*args, **kwargs)
File "/home/xiaobing/pytorch-offical/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/xiaobing/pytorch-offical/torch/_inductor/mkldnn.py", line 344, in forward
return self._conv_forward(input, other, self.weight, self.bias)
File "/home/xiaobing/pytorch-offical/torch/_inductor/mkldnn.py", line 327, in _conv_forward
return torch.ops.mkldnn._convolution_pointwise_(
File "/home/xiaobing/pytorch-offical/torch/_ops.py", line 499, in __call__
return self._op(*args, **kwargs or {})
File "/home/xiaobing/pytorch-offical/torch/_inductor/overrides.py", line 38, in __torch_function__
return func(*args, **kwargs)
File "/home/xiaobing/pytorch-offical/torch/_ops.py", line 499, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: !schema.hasAnyAliasInfo() INTERNAL ASSERT FAILED at "/home/xiaobing/pytorch-offical/aten/src/ATen/FunctionalizeFallbackKernel.cpp":32, please report a bug to PyTorch. mutating and aliasing ops should all have codegen'd kernels
While executing %self_layer2_0_downsample_0 : [#users=2] = call_module[target=self_layer2_0_downsample_0](args = (%self_layer1_1_conv2, %self_layer2_0_conv2), kwargs = {})
Original traceback:
File "/home/xiaobing/vision/torchvision/models/resnet.py", line 100, in forward
identity = self.downsample(x)
| File "/home/xiaobing/vision/torchvision/models/resnet.py", line 274, in _forward_impl
x = self.layer2(x)
| File "/home/xiaobing/vision/torchvision/models/resnet.py", line 285, in forward
return self._forward_impl(x)
```
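For context, the assert comes from functionalization's boxed fallback: any op whose schema carries mutation/alias annotations (e.g. `Tensor(a!)`) is expected to have a functionalization kernel, and `mkldnn._convolution_pointwise_.binary` mutates its `other` argument without one, so AOTAutograd's metadata-collection trace fails. A minimal sketch of the same failure mode, using a hypothetical custom op registered via `torch.library` (the `demo` namespace and `inplace_add_` name are made up for illustration, not part of this PR):

```python
import torch

# Hypothetical namespace and op name, for illustration only.
lib = torch.library.Library("demo", "DEF")
# The "(a!)" annotation marks the op as mutating its first argument --
# the same kind of schema alias info that the functionalization
# fallback (FunctionalizeFallbackKernel.cpp) refuses to handle.
lib.define("inplace_add_(Tensor(a!) self, Tensor other) -> Tensor(a!)")
lib.impl("inplace_add_", lambda self, other: self.add_(other), "CPU")

def f(x, y):
    return torch.ops.demo.inplace_add_(x, y)

# Eager mode dispatches straight to the CPU kernel and works fine.
out = f(torch.ones(2), torch.ones(2))

# Functionalizing the same call may hit the fallback assert, because no
# functionalization kernel exists for this mutating op (exact behavior
# depends on the PyTorch version).
try:
    torch.func.functionalize(f)(torch.ones(2), torch.ones(2))
    functionalize_failed = False
except RuntimeError:
    functionalize_failed = True
```

The fix in this PR follows the same logic in reverse: route the inductor CPU fusion through a non-mutating form of the op during tracing, so no alias-annotated schema reaches the functionalization fallback.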
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94581
Approved by: https://github.com/jgong5, https://github.com/jansel