0157b2d7 - Simple Custom Operator API, V0 (#98440)

This PR introduces CustomOp, a wrapper around a dispatcher operator that lets users define custom operators. It adds the skeleton for CustomOp and some very simple behavior. As of this PR, one can:
- create a CustomOp for an operator that has no in-place or aliasing semantics
- give it CPU/CUDA and Meta implementations
- trace it into a graph via make_fx

The design follows https://docs.google.com/document/d/19Uc5OUCA187q9BZggJb70RT2ZoSTDoG5QQkJkZwd25M/edit. Concretely, this PR implements the following pieces from that doc:
- Entrypoint 1 (CustomOp.define, creating a new custom operator)
- impl (to define device-specific code) and impl_meta (to define meta formulas)

The short-term goal is to get the code to a state where it can be trialed by the export folks. On top of this PR, the blockers are:
- adding Entrypoint 3 (CustomOp.from_existing)
- adding a way to do data-dependent shape formulas

These will come in future PRs, since this one is already long. Things that will come in the longer near term (before 2.1):
- adding the other entrypoints mentioned in the doc (2 & 3)
- more safety checks and better error messages
- support for views and mutation
- support for defining autograd formulas
- support for functionalization
- making this API public (it is private right now)

Test Plan:
- added a new test case, TestCustomOp; it mostly tests a bunch of error cases
- added OpInfos for custom operators and hooked them up to test_proxy_tensor to check that they work with make_fx. These custom operators were based on the ones in autograd_function_db.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98440
Approved by: https://github.com/ezyang
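To make the shape of the API concrete, here is a toy, self-contained sketch of the define / impl / impl_meta pattern the message describes. This is NOT PyTorch's implementation and the signatures are assumptions for illustration only: a minimal registry that pairs per-device kernels with a meta (shape-only) formula, which is the information make_fx-style tracing needs.

```python
# Toy sketch (not PyTorch code) of the CustomOp pattern described above:
# one abstract operator name, per-device kernels, and a meta formula
# that propagates shapes without touching real data.

class CustomOp:
    def __init__(self, name):
        self.name = name
        self._impls = {}   # device string -> kernel function
        self._meta = None  # shape-propagation formula

    @staticmethod
    def define(name):
        # Entrypoint 1: create a new custom operator.
        return CustomOp(name)

    def impl(self, device):
        # Register device-specific code ("cpu", "cuda", ...).
        def register(fn):
            self._impls[device] = fn
            return fn
        return register

    def impl_meta(self, fn):
        # Register a meta formula: computes only output metadata (here,
        # a length), analogous to running the op on meta tensors.
        self._meta = fn
        return fn

    def __call__(self, x, device="cpu"):
        if device == "meta":
            return self._meta(x)
        return self._impls[device](x)


# Hypothetical operator used only for this sketch.
double = CustomOp.define("mylib::double")

@double.impl("cpu")
def double_cpu(x):
    return [v * 2 for v in x]

@double.impl_meta
def double_meta(x):
    return ("shape", len(x))
```

Calling the op dispatches by device key: `double([1, 2, 3])` runs the CPU kernel, while `double([1, 2, 3], device="meta")` runs only the shape formula, mirroring how a tracer can record the op without executing real kernels.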