bfa67264 - [1/N] Nnapi backend execute and compile (#62272)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/62272

Added the Android NNAPI delegate implementation of runtime initialization (compilation) and execution. The delegate's preprocess step was [previously implemented](https://github.com/pytorch/pytorch/pull/62225). Now the rest of the delegate, which implements client-side execution, is added.

**nnapi_backend_lib.cpp**: Implementation of the delegate's compile and execute. `execute()` is essentially a C++ implementation of [`NnapiModule`](https://github.com/pytorch/pytorch/blob/master/torch/backends/_nnapi/prepare.py), which wraps an NNAPI Compilation and handles preparation of weights, inputs, and outputs.
- Any steps that can be done before execution are moved to `compile()`.
- `init()` cannot be moved to `compile()` because it requires real inputs for dynamic shaping.
- `shape_compute_module` cannot currently be deserialized in `compile()`, since mobile::Module has no IValue conversion.
- Processed arguments that are modified by `init()` must be kept as member variables. All other processed arguments are passed through a dictionary, `handles`.

**nnapi_bind.cpp & nnapi_bind.h**: Created a header file for `nnapi_bind.cpp` so that its NnapiCompilation class can be used by `nnapi_backend_lib.cpp`.

**test_backend_nnapi.py**: Enabled execution testing.

ghstack-source-id: 135432844

Test Plan: Imported from OSS. Tested on devserver.
1. Load and unpack a special devserver build of NNAPI: `jf download GICWmAAzUR0eo20TAPasVts8ObhobsIXAAAz --file "nnapi-host-linux.tar.xz"`
2. `export LIBNEURALNETWORKS_PATH=/path/to/libneuralnetworks.so`
3. Run the unit tests: `python test/test_jit.py TestNnapiBackend` and `python test/test_nnapi.py`

TODO: test with the lite interpreter runtime.

Reviewed By: raziel, iseeyuan

Differential Revision: D29944873

fbshipit-source-id: 48967d873e79ef2cce9bcba2aeea3c52f7a18c07
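The compile/execute split described above can be sketched in isolation. This is a minimal, self-contained illustration of the pattern, not the actual `nnapi_backend_lib.cpp` code: `NnapiDelegateSketch`, the `Tensor`/`Handles` aliases, and the toy inference are all hypothetical stand-ins for the real torch types (`at::Tensor`, `c10::IValue` dictionaries) and the real NNAPI calls. It shows the two properties the commit calls out: work that needs no real inputs happens in `compile()` and travels via the `handles` dictionary, while `init()` is deferred to the first `execute()` because dynamic shaping needs a concrete input.

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <unordered_map>
#include <vector>

// Illustrative stand-ins for the real types (at::Tensor, c10::IValue, ...).
using Tensor = std::vector<float>;
using Handles = std::unordered_map<std::string, Tensor>;

// Hypothetical delegate mirroring the compile/execute split described above.
class NnapiDelegateSketch {
 public:
  // compile(): everything that does not need real inputs happens here.
  // Processed arguments that init() will later mutate are kept as member
  // variables; the rest are passed through the `handles` dictionary.
  Handles compile(const Tensor& weights) {
    weights_ = weights;                      // member: mutated by init()
    Handles handles;
    handles["ser_model"] = {1.f, 2.f, 3.f};  // pre-processed, read-only
    return handles;
  }

  // execute(): init() runs lazily on the first call because it needs a
  // real input for dynamic shaping; then the "compiled" model runs.
  Tensor execute(const Handles& handles, const Tensor& input) {
    (void)handles;
    if (!initialized_) {
      init(input);                           // dynamic shapes resolved here
    }
    // Toy "inference": scale every input element by the first weight.
    Tensor out(input.size());
    for (std::size_t i = 0; i < input.size(); ++i) {
      out[i] = input[i] * weights_.at(0);
    }
    return out;
  }

 private:
  void init(const Tensor& first_input) {
    // The real delegate builds the NNAPI Compilation here using the
    // input's concrete shape; this sketch just records that it ran.
    (void)first_input;
    initialized_ = true;
  }

  Tensor weights_;
  bool initialized_ = false;
};
```

The design point this illustrates: because `init()` depends on runtime input shapes, the delegate cannot finish setup at `compile()` time, so state touched by `init()` must outlive `compile()` as member variables rather than riding along in `handles`.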