[ONNX] Support None in fx.args as torchlib inputs (#108708)
Prior to this PR, if None was returned from an intermediate node, the export crashed because None is not an expected input to `_fill_tensor_shape_type`, which raised a beartype error. That function fills in the shape and type of a TorchScriptTensor according to its info from the FX graph.
This was discovered after https://github.com/microsoft/onnxscript/pull/1043 was supported. That op specifically produces None as one of its values, but the only output from it that is actually consumed is the first one (which is not None).
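A minimal sketch of the idea behind the fix (all names here are hypothetical, not the exporter's actual signatures): when pairing FX values with ONNX tensors, None entries are skipped rather than passed to the shape/type filler, so an unused None output no longer trips the type check.

```python
from typing import Any, Optional, Sequence


def fill_tensor_shape_type(
    onnx_tensors: Sequence[Optional[dict]],
    names: Sequence[str],
    fx_values: Sequence[Optional[dict]],
) -> None:
    """Hypothetical sketch: copy shape/dtype info from FX values onto ONNX tensors.

    Plain dicts stand in for TorchScriptTensor and FX metadata here.
    """
    for tensor, name, fx_value in zip(onnx_tensors, names, fx_values):
        # The fix in spirit: intermediate nodes may yield None (e.g. an
        # optional output that nothing consumes); skip those entries
        # instead of crashing on the type check.
        if tensor is None or fx_value is None:
            continue
        tensor["name"] = name
        tensor["shape"] = fx_value["shape"]
        tensor["dtype"] = fx_value["dtype"]


# Usage: the second entry is None (unused output) and is skipped safely.
t0: dict = {}
fill_tensor_shape_type(
    [t0, None],
    ["out0", "out1"],
    [{"shape": (2, 64), "dtype": "int64"}, None],
)
```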
Reference test from a TorchBench model:
```python
def test_nanogpt(self):
    import sys

    sys.path.append("/home/titaiwang")
    from nanoGPT.model import GPT, GPTConfig

    # Load the model
    kwargs = {
        "block_size": 256,
        "vocab_size": 8096,  # GPT-2 vocab_size of 50257, padded up to nearest multiple of 64 for efficiency
        "n_layer": 2,
        "n_head": 2,
        "n_embd": 128,
        "dropout": 0.0,
        "bias": False,  # True: bias in Linears and LayerNorms, like GPT-2. False: a bit better and faster
    }
    config = GPTConfig(**kwargs)
    with torch.backends.cuda.sdp_kernel(
        enable_flash=True, enable_mem_efficient=True
    ):
        model = GPT(config)
    print("Done loading model")

    inputs = torch.arange(128).view(2, 64)
    targets = torch.arange(128).view(2, 64)

    self.run_test_with_fx_to_onnx_exporter_and_onnx_runtime(
        model,
        (inputs,),
        input_kwargs={
            "targets": targets,
        },
        verbose=True,
    )
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108708
Approved by: https://github.com/justinchuby, https://github.com/thiagocrepaldi