[pytorch] include all overloads for OSS custom build
Summary:
For mobile custom build, we only generate code for ops that are used by
specific models to reduce binary size.
There are multiple places where we apply the op filtering:
- generated_unboxing_wrappers_*.cpp
- autograd/VariableType*.cpp
- c10 op registration (in aten/gen.py)
For c10 op registration, we filter by the main op name - all overloads
that share the main op name are kept.
For generated_unboxing_wrappers_*, we filter by the full op name - only
ops whose overload name matches exactly are kept.
This PR changes generated_unboxing_wrappers_* and autograd/VariableType*.cpp
codegen to also filter by the main op name.
The reasons are:
- keeping all overloads offers better backward compatibility;
- generated_unboxing_wrappers_* are relatively small, as they only
contain thin wrappers for root ops;
- generated_unboxing_wrappers_* will be replaced by c10 op registration
soon anyway;
- autograd/VariableType*.cpp are not included in the OSS build.
Why does it offer better backward compatibility? #40737 is an example:
it introduced a new `_convolution` overload and renamed the original one
to `_convolution.deprecated`. Before this PR, a model prepared with an
older version of PyTorch could not run on the custom mobile build
generated on that PR, because `_convolution.deprecated` was dropped from
the custom build under the full-op-name matching policy. By relaxing it
to main-op-name matching, the mobile custom build CI on that PR can pass.
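The difference between the two policies can be sketched as follows. This
is an illustrative Python snippet, not the actual codegen code: the
helper names (`main_op_name`, `keep_full_match`, `keep_main_match`) and
the `selected` set are hypothetical, assuming op names use the
`namespace::name.overload` format.

```python
def main_op_name(op_name):
    # Strip the overload suffix:
    # "aten::_convolution.deprecated" -> "aten::_convolution"
    return op_name.split('.', 1)[0]

def keep_full_match(op_name, selected_ops):
    # Old policy for generated_unboxing_wrappers_*: keep only ops whose
    # full name, including the overload name, appears in the model's op list.
    return op_name in selected_ops

def keep_main_match(op_name, selected_ops):
    # Policy after this PR (matching the existing c10 registration policy):
    # keep every overload whose main op name appears in the model's op list.
    return main_op_name(op_name) in {main_op_name(op) for op in selected_ops}

# A model prepared with an older PyTorch references "aten::_convolution";
# #40737 renamed that original overload to "aten::_convolution.deprecated".
selected = {"aten::_convolution"}  # ops recorded from the model

print(keep_full_match("aten::_convolution.deprecated", selected))  # False: dropped
print(keep_main_match("aten::_convolution.deprecated", selected))  # True: kept
```

Under the old policy the renamed overload is filtered out of the custom
build and the old model fails to run; under main-op-name matching it is
retained.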
Will test the size impact for FB production build before landing.
Differential Revision: D22809564
Test Plan: Imported from OSS
Reviewed By: iseeyuan
Pulled By: ljk53
fbshipit-source-id: e2fc017da31f38b9430cc2113f33e6d21a0eaf0b