c377a859 - Add `nonzero_static()` op to pytorch to unblock export (#97417)

Summary: Add a new experimental Python op (`torch.nonzero_static`) for export. There is NO CUDA impl included in this PR.

Example: given the input tensor `x = torch.tensor([[1, 0], [3, 2]])`:
- calling the regular `nonzero()` on `x` gives `tensor([[0, 0], [1, 0], [1, 1]])`
- calling `nonzero_static(x, size=4)` gives `tensor([[0, 0], [1, 0], [1, 1], [fill_value, fill_value]])` (padded)
- calling `nonzero_static(x, size=2)` gives `tensor([[0, 0], [1, 0]])` (truncated)

Test Plan:

**Unit Tests**
```
buck test @mode/dev-nosan //caffe2/test:test_dynamo -- 'caffe2/test:test_dynamo - test_export.py::ExportTests::test_export_with_nonzero_static' -- 'caffe2/test:test_dynamo - test_misc.py::MiscTests::test_nonzero_static'
```

**PT2 Export with `nonzero_static()`**

Example of the `GraphModule` in the exported graph:
```
def forward(self, x):
    arg0, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec)
    nonzero_static_default = torch.ops.aten.nonzero_static.default(arg0, size = 4);  arg0 = None
    return pytree.tree_unflatten([nonzero_static_default], self._out_spec)
```

Differential Revision: D44324808

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97417

Approved by: https://github.com/ezyang
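The pad-or-truncate behavior described above can be modeled without PyTorch. The sketch below is a pure-Python illustration of the semantics for 2-D inputs only, not the actual ATen implementation; the `fill_value=-1` default and the helper name `nonzero_static_sketch` are assumptions for illustration.

```python
def nonzero_static_sketch(matrix, size, fill_value=-1):
    """Sketch of `torch.nonzero_static` semantics for a 2-D input.

    Collects the [row, col] indices of nonzero elements in row-major
    order (as `nonzero()` does), then pads with `fill_value` rows or
    truncates so the result has exactly `size` rows. The fixed output
    shape is what makes the op export-friendly.
    """
    indices = [
        [i, j]
        for i, row in enumerate(matrix)
        for j, value in enumerate(row)
        if value != 0
    ]
    ndim = 2  # this sketch handles 2-D inputs only
    if len(indices) < size:
        # pad with rows of fill_value up to the requested size
        indices += [[fill_value] * ndim for _ in range(size - len(indices))]
    # truncate if there were more nonzero elements than `size`
    return indices[:size]


x = [[1, 0], [3, 2]]
print(nonzero_static_sketch(x, size=4))  # padded to 4 rows
print(nonzero_static_sketch(x, size=2))  # truncated to 2 rows
```

Unlike `nonzero()`, whose output length depends on the data, the `size` argument fixes the output shape ahead of time, which is why the op unblocks export of graphs with static shapes.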
Changed files:
- aten/src/ATen/native/TensorAdvancedIndexing.cpp
- aten/src/ATen/native/native_functions.yaml
- test/dynamo/test_export.py
- test/dynamo/test_misc.py
- test/expect/HasDecompTest.test_has_decomposition.expect
- test/test_mps.py
- torch/_meta_registrations.py
- torch/_tensor_docs.py
- torch/overrides.py
- torch/testing/_internal/common_methods_invocations.py