operator_compile_check v0 (#103198)
This PR adds `operator_compile_check` (please bikeshed the name), a gradcheck-like
API for testing whether a custom operator is supported by torch.compile.
The API is scoped to checking only that the operator interacts correctly with
torch.compile (e.g. it is not going to include gradcheck itself).
Concretely, it currently checks the following (a usage sketch follows the list):
- schema correctness
- make_fx traceable (static shapes)
- aot_autograd correctness (static shapes)
- torch.compile correctness, with and without inductor (static shapes)
- make_fx traceable (dynamic shapes)
- aot_autograd correctness (dynamic shapes)
- torch.compile correctness, with and without inductor (dynamic shapes)
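A rough sketch of how such a check might be invoked. The import path and exact call
signature below are assumptions (the PR text only says it is gradcheck-like); the
`torch.library` op definition is just a toy example, not from this PR:

```python
import numpy as np
import torch

# Assumed location of the new API; the actual module path may differ.
from torch.testing._internal.optests import operator_compile_check

# Toy custom operator whose CPU kernel goes through NumPy, so torch.compile
# cannot trace into it and must treat it as an opaque op.
lib = torch.library.Library("mylib", "DEF")
lib.define("numpy_sin(Tensor x) -> Tensor")

def numpy_sin_cpu(x):
    return torch.from_numpy(np.sin(x.numpy()))

lib.impl("numpy_sin", numpy_sin_cpu, "CPU")

# Abstract (meta) implementation so the op is traceable with fake tensors.
def numpy_sin_meta(x):
    return torch.empty_like(x)

lib.impl("numpy_sin", numpy_sin_meta, "Meta")

# Assumed gradcheck-style call convention: op, sample args, sample kwargs.
operator_compile_check(
    torch.ops.mylib.numpy_sin,
    (torch.randn(3),),
    {},
)
```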
Test Plan:
We test a number of error cases, covering many failure modes that have tripped
us up in the past, and assert that they (mostly) produce clear error messages
(one such case is sketched after the list):
- incorrect schema (mutates)
- incorrect schema (has a view)
- missing abstract impl
- incorrect abstract impl
- missing functionalization kernel
- autograd registered at CPU/CUDA keys
- operator is not traceable
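For illustration, a hedged sketch of the "incorrect schema (mutates)" case; the
import path, call signature, and the exact error surfaced are assumptions, but the
shape of the failure mode matches the list above: the schema declares a functional
op while the kernel mutates its input in-place.

```python
import torch

# Assumed location of the new API, as above.
from torch.testing._internal.optests import operator_compile_check

lib = torch.library.Library("mylib_bad", "DEF")
# Schema claims the op is functional...
lib.define("bad_relu(Tensor x) -> Tensor")

def bad_relu_cpu(x):
    # ...but the kernel mutates its input, an undeclared mutation.
    return x.relu_()

lib.impl("bad_relu", bad_relu_cpu, "CPU")

# The check is expected to fail loudly with a schema-correctness error rather
# than letting torch.compile silently compute wrong results.
try:
    operator_compile_check(torch.ops.mylib_bad.bad_relu, (torch.randn(3),), {})
except Exception as err:
    print(f"expected failure: {err}")
```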
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103198
Approved by: https://github.com/bdhirsh, https://github.com/soulitzer