pytorch · commit abe41aee
[ONNX] Support custom Op with onnx-script local function (#86906)

Extend `register_custom_op` to support onnx-script local functions. The FunctionProto produced by onnx-script is represented as a custom op and inserted into the ModelProto for op execution.

NOTE: I ran experiments on the >2GB case with a simple model that has large initializers:

```python
import torch

class Net(torch.nn.Module):
    def __init__(self, B, C):
        super().__init__()
        self.layer_norm = torch.nn.LayerNorm((B, C), eps=1e-3)

    def forward(self, x):
        return self.layer_norm(x)

N, B, C = 3, 25000, 25000
model = Net(B, C)
x = torch.randn(N, B, C)
torch.onnx.export(model, x, "large_model.onnx", opset_version=12)
```

It turns out that model_bytes does not exceed 2GB after the `_export_onnx` pybind C++ function, because that function splits the initializers into external files and serializes the model before returning the model bytes; protobuf does not allow a message larger than 2GB under any circumstances.

The test cases can be found in the next PR #86907.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/86906
Approved by: https://github.com/justinchuby, https://github.com/BowenBao
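For illustration, below is a minimal sketch of how a symbolic function might emit an onnx-script local function for a custom op. It follows the pattern from the PyTorch ONNX docs, but the opset version, domain name, attribute values, and the use of `onnxscript.values.Opset`, `@onnxscript.script`, `g.onnxscript_op(...)`, and `torch.onnx.register_custom_op_symbolic` here are assumptions for illustration, not this PR's test code; the actual test cases live in PR #86907.

```python
# Hedged sketch: register an onnx-script local function as the symbolic for
# aten::selu. Domain, opset version, and attribute values are assumptions.
import torch
import onnxscript
from onnxscript.onnx_opset import opset15 as op

custom_opset = onnxscript.values.Opset(domain="com.example", version=1)

@onnxscript.script(custom_opset)
def Selu(X, alpha: float, gamma: float):
    # This body is compiled into an ONNX FunctionProto; the exporter inserts
    # it into the exported ModelProto as a local function.
    alphaX = op.CastLike(alpha, X)
    gammaX = op.CastLike(gamma, X)
    neg = gammaX * (alphaX * op.Exp(X) - alphaX)
    pos = gammaX * X
    zero = op.CastLike(0, X)
    return op.Where(X <= zero, neg, pos)

def custom_selu(g, X):
    # Emit a call to the onnx-script local function instead of a plain
    # custom node; float attributes use the `_f` suffix convention.
    return g.onnxscript_op(Selu, X, alpha_f=1.67326, gamma_f=1.0507).setType(X.type())

torch.onnx.register_custom_op_symbolic(
    symbolic_name="aten::selu",
    symbolic_fn=custom_selu,
    opset_version=15,
)

# Exporting a model that uses SELU now routes through the local function.
torch.onnx.export(torch.nn.SELU(), torch.randn(2, 3), "selu_custom.onnx", opset_version=15)
```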