Fix binary size in schema inference (#26878)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26878
Before, each function signature used in one or more ops got its own template instantiation that creates the FunctionSchema object for it. As we've seen in the past, all these vector<> constructors in the FunctionSchema object contribute significant binary size.
With this PR, we instead create an intermediate constexpr std::array that has minimal binary size and can be embedded directly into the executable; at runtime, a small piece of code constructs the vector<>s from it.
This reduces libtorch.so binary size by about 800 KB.
ghstack-source-id: 90842811
Test Plan: measure libtorch.so size
Differential Revision: D17597752
fbshipit-source-id: 53442b565a7747c0d0384b2e3b845729c3daddfd