Support Registering a Variable Length List of Builtin Modules for torch::deploy Builtin Libraries (#66021)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66021
A builtin library consists of a list of frozen modules and a list of builtin modules. For tensorrt this is simple, since there is only a single builtin module, tensorrt.tensorrt. But it can be complex for a library like numpy, which contains multiple builtin modules (np.core._multiarray_umath, np.random.mtrand, etc.), if we want to add it as a torch::deploy builtin. This change enhances the macro that registers builtin libraries so it accepts a variable-length list of builtin modules. We use the macro to register frozentorch, frozenpython, and tensorrt for now, and can also use it to register libraries like numpy later on.
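For context, the distinction above mirrors CPython's own notion of builtin modules: modules compiled into the interpreter binary, which `import` resolves without touching the filesystem. An embedded interpreter like torch::deploy's must register each such module's init function up front for the same lookup to succeed. A minimal standalone illustration (plain CPython, not torch::deploy code):

```python
import sys

# Builtin modules are statically linked into the interpreter, so they
# appear in sys.builtin_module_names. Extension libraries like numpy
# are normally loaded from shared objects instead, which is why adding
# one as a torch::deploy builtin requires registering its init functions.
print("sys" in sys.builtin_module_names)    # True: sys is always builtin
print("numpy" in sys.builtin_module_names)  # False in a stock build
```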
The enhanced macro now looks as follows. Although we don't need to worry about backward compatibility yet, the enhanced version is fully compatible with the previous one: the previous version is just the special case where the library contains no builtin modules.
```
REGISTER_TORCH_DEPLOY_BUILTIN(library_name_without_quote, frozen_modules_list,
builtin_module_name_1, builtin_module_init_function_1, ...,
builtin_module_name_N, builtin_module_init_function_N)
```
ghstack-source-id: 140007970
Test Plan:
1. Play around with interactive_embedded_interpreter.cpp to import torch._C, tensorrt.tensorrt, etc. inside the embedded interpreter.
2. Enhance test_builtin_registry.cpp.
3. Run test_deploy.cpp and test_deploy_gpu.cpp.
Reviewed By: suo
Differential Revision: D31349390
fbshipit-source-id: 70a1fcf660341180fc4d5195aed15ceb07c2bef7