be3ad8c6 - [PyTorch][2/4] Support static dispatch with multiple backends (#75605)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75605

Use case: Milan models have multiple backends and need to use static dispatch to save on static initialization time and to hit native functions directly from the unboxed APIs. This change passes in List[BackendIndex] and adds the ability to generate code for multiple static backends with 1 or 0 kernels.

ghstack-source-id: 154525738
(Note: this ignores all push blocking failures!)

Test Plan: Builds lite_predictor_flatbuffer with multiple backends:

```
buck build --config pt.enable_lightweight_dispatch=1 --config pt.static_dispatch_backend=CPU,QuantizedCPU,CompositeExplicitAutograd //xplat/caffe2/fb/lite_predictor:lite_predictor_flatbuffer
```

Reviewed By: larryliu0820

Differential Revision: D35510644

fbshipit-source-id: f985718ad066f8578b006b4759c4a3bd6caac176

(cherry picked from commit a6999729c8cc26c54b8d5684f6585d6c50d8d913)
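For readers unfamiliar with the codegen shape this summary describes: given a list of backend indices, the generator emits a static dispatch stub per operator that either calls the single available kernel directly, errors when no kernel exists, or branches on the dispatch key when several backends provide one. The sketch below is a minimal, hypothetical Python illustration of that idea only; `BackendIndex` here is a stand-in dataclass, not torchgen's real type, and the emitted C++ strings are simplified placeholders, not the actual generated output.

```python
# Hypothetical sketch of static dispatch codegen over multiple backends.
# Not torchgen's real types or output; illustrates the "1 or 0 kernels
# per backend" cases described in the commit summary.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class BackendIndex:
    dispatch_key: str        # e.g. "CPU", "QuantizedCPU"
    kernels: Dict[str, str]  # operator name -> native kernel symbol


def static_dispatch(op: str, backends: List[BackendIndex]) -> str:
    # Collect only the backends that actually provide a kernel for this op.
    hits = [(b.dispatch_key, b.kernels[op]) for b in backends if op in b.kernels]
    if not hits:
        # 0 kernels anywhere: emit an error stub.
        return f'TORCH_CHECK(false, "{op}: no static kernel registered");'
    if len(hits) == 1:
        # Exactly 1 kernel: call it unconditionally, no runtime branching.
        return f"return at::native::{hits[0][1]}(...);"
    # Multiple candidate backends: branch on the computed dispatch key (_dk).
    cases = "\n".join(
        f"  case DispatchKey::{key}: return at::native::{kernel}(...);"
        for key, kernel in hits
    )
    return f"switch (_dk) {{\n{cases}\n  default: TORCH_CHECK(false);\n}}"


backends = [
    BackendIndex("CPU", {"add.Tensor": "add"}),
    BackendIndex("QuantizedCPU", {"add.Tensor": "quantized_add"}),
]
print(static_dispatch("add.Tensor", backends))
```

Run as-is, this prints a switch over the two backends; dropping one of the entries collapses it to a direct call, matching the 1-kernel fast path.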