bd3c63ae - [PyTorch Edge] Move torch::jit::mobile::_export_operator_list() from serialization/export_module.cpp to mobile/import.cpp (#56044)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56044

We want to drop the full-jit dependencies from the auto-generated unit tests for two reasons:

1. Running bloaty on the auto-generated unit tests should be roughly representative of the actual binary size.
2. The runtime environment of the auto-generated unit tests should be as close to the production environment as possible, so that the tests run in a production-like runtime.

Due to the dependence on full-jit, we aren't there yet. The auto-generated tests probably won't need `_export_operator_list()` eventually, but for now they do: it is used to decide whether the model being run is a Metal GPU model or a CPU model, which gates whether the test runs that model at all (see the sketch below). Eventually we can stop doing this check in the test and do it in the codegen from PTM-CLI instead (by fetching the operators from that tool and writing out to the BUCK file which backend(s) the model targets). However, that will take some time to land, so in the spirit of expediency this change is being proposed.

Discussed offline with iseeyuan.

ghstack-source-id: 126656877

Test Plan: Build + BSB.

Reviewed By: iseeyuan

Differential Revision: D27694781

fbshipit-source-id: f31a2dfd40803c02f4fd19c45a3cc6fb9bdf9697
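Below is a minimal sketch (not the code from this PR) of how a generated test could use `torch::jit::mobile::_export_operator_list()` to gate execution on the model's backend. The `is_metal_model` helper, the `"metal"` operator-name prefix check, and the `model.ptl` path are illustrative assumptions; only `_load_for_mobile()` and `_export_operator_list()` come from the mobile API discussed here.

```cpp
#include <iostream>
#include <string>

#include <torch/csrc/jit/mobile/import.h>
#include <torch/csrc/jit/mobile/module.h>

// Hypothetical helper: report whether any operator referenced by the mobile
// module looks like a Metal backend op (prefix check is an assumption).
bool is_metal_model(torch::jit::mobile::Module& module) {
  // Returns the set of operator names (with overload names) used by the
  // module's methods, e.g. "aten::conv2d".
  const auto ops = torch::jit::mobile::_export_operator_list(module);
  for (const auto& op : ops) {
    if (op.rfind("metal", 0) == 0) {  // operator name starts with "metal"
      return true;
    }
  }
  return false;
}

int main() {
  // Hypothetical model path; a real generated test would bundle its own model.
  torch::jit::mobile::Module module = torch::jit::_load_for_mobile("model.ptl");
  if (is_metal_model(module)) {
    std::cout << "Skipping: model targets the Metal GPU backend\n";
    return 0;
  }
  // ... run the CPU model and verify its outputs ...
  return 0;
}
```

Because only lite-interpreter headers are included, a test written this way would not pull in the full-jit serialization code, which is the point of moving `_export_operator_list()` into mobile/import.cpp.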