86ce2950 - remove xla-specific stuff from codegen (minus CPU fallback) (#58064)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58064

**Summary**

This PR removes all xla-specific logic from the codegen except in two places:
- renaming the `aten_xla_type.h/cpp` template files; I'm going to do that in a separate PR just to keep this diff easier to understand
- the CPU fallback logic (everything in `aten_xla_type_default.h/cpp` and `gen_external_aten_fallbacks.py`). I'm trying to kill all of that logic in a subsequent PR by making the CPU fallback a boxed kernel, so it felt unnecessary to go through it all and remove the xla references here.

**Notable changes**

The xla codegen includes some custom logging in each kernel wrapper, so I added a few new knobs to the external yaml, which we now test. I have a corresponding [xla-side PR](https://github.com/pytorch/xla/pull/2944) with the new yaml changes, which look like this:
```
per_op_log: XLA_FN_TRACK(3)
per_argument_log: TF_VLOG(3)
cpu_fallback_counter: XLA_COUNTER("aten::{name}", 1)
extra_headers: >
  #include <tensorflow/compiler/xla/xla_client/debug_macros.h>
  #include <tensorflow/compiler/xla/xla_client/metrics.h>
  #include <tensorflow/compiler/xla/xla_client/tf_logging.h>
  #include <torch_xla/csrc/function_call_tracker.h>
  #include <torch_xla/csrc/aten_xla_type.h>
  #include <torch_xla/csrc/aten_xla_type_default.h>
```

Test Plan: Imported from OSS

Reviewed By: anjali411

Differential Revision: D28711095

Pulled By: bdhirsh

fbshipit-source-id: 90a48440f2e865a948184e2fb167ea240ada47bb
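To make the yaml knobs above concrete, here is a minimal, hypothetical sketch of what a single generated CPU-fallback wrapper could look like after those knobs are substituted in. The operator choice (`aten::abs`), the `AtenXlaTypeDefault::abs` signature, and the fallback body are illustrative assumptions, not the actual torch_xla codegen output; only the macros and headers come from the yaml snippet above.

```cpp
// Hypothetical sketch only: names, structure, and the fallback body are
// assumptions for illustration, not the real torch_xla codegen output.
#include <tensorflow/compiler/xla/xla_client/debug_macros.h>
#include <tensorflow/compiler/xla/xla_client/metrics.h>
#include <tensorflow/compiler/xla/xla_client/tf_logging.h>
#include <torch_xla/csrc/function_call_tracker.h>
#include <torch_xla/csrc/aten_xla_type.h>
#include <torch_xla/csrc/aten_xla_type_default.h>

namespace torch_xla {

// A CPU-fallback wrapper for a single op (here aten::abs, chosen arbitrarily).
at::Tensor AtenXlaTypeDefault::abs(const at::Tensor& self) {
  XLA_FN_TRACK(3);              // per_op_log: emitted once at the top of each wrapper
  XLA_COUNTER("aten::abs", 1);  // cpu_fallback_counter, with {name} filled in by the codegen
  TF_VLOG(3) << "falling back to CPU for aten::abs, self device="
             << self.device();  // per_argument_log: one line per tensor argument
  // Fall back: copy the input to CPU, run the reference kernel, copy the result back.
  at::Tensor cpu_self = self.to(at::kCPU);
  at::Tensor cpu_result = at::abs(cpu_self);
  return cpu_result.to(self.device());
}

}  // namespace torch_xla
```

The design point is that the core codegen no longer needs to hard-code any of this: the backend's external yaml supplies the logging macros and headers, so everything XLA-specific lives on the torch_xla side.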