pytorch
72dd9b24 - [inductor] Make some improvements to FX graph caching (#117888)

[inductor] Make some improvements to FX graph caching (#117888)

Summary: This is in preparation for enabling FX graph caching by default. First, fix some bugs uncovered by running all unit tests under `test/inductor/`. I'll enable it in a separate diff in case we need to revert.

Summary of changes:
* Turn off caching for tests that require compilation, e.g., when checking that a relevant counter was incremented.
* Bypass caching when we see mkldnn tensors as constants, since they currently don't serialize and so can't be saved to disk (see the bypass sketch below).
* Include various global settings that could affect compilation in the cache key calculation (see the key-calculation sketch below).
* Handle a few config settings that break key calculation.
* Handle code paths where no ShapeEnv is available (the cache implementation requires a ShapeEnv as part of handling guards).
* Skip caching when freezing is enabled, since freezing can embed constants that wouldn't be static across runs.
* Fix the clear() method to not throw when the cache /tmp dir doesn't exist (see the clear() sketch below).

Test Plan: Ran all tests under `test/inductor/` twice with TORCHINDUCTOR_FX_GRAPH_CACHE=1 to exercise any tests that might be affected by caching.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/117888
Approved by: https://github.com/eellison
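
The bypass conditions above (mkldnn constants, freezing) amount to a pre-save check that the compiled artifact is safe to reuse across runs. Here is a minimal sketch of that idea; the function name `can_cache` and its parameters are hypothetical, not the actual `torch._inductor.codecache` API:

```python
import torch


def can_cache(constants: list[torch.Tensor], freezing_enabled: bool) -> bool:
    """Hypothetical pre-save check mirroring the bypass rules above."""
    # mkldnn tensors currently don't serialize, so a graph holding them
    # as constants can't be written to the on-disk cache.
    if any(t.is_mkldnn for t in constants):
        return False
    # Freezing can bake constants into the graph that won't be stable
    # across runs, so skip caching entirely in that case.
    if freezing_enabled:
        return False
    return True
```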
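Folding global settings into the cache key matters because two identical FX graphs can compile to different code under different global flags; if the key ignores them, a stale artifact could be reused. A minimal sketch of the idea follows, assuming hypothetical helper names (`environment_fingerprint`, `fx_graph_cache_key`) rather than the real key-calculation code:

```python
import hashlib
import pickle

import torch


def environment_fingerprint() -> dict:
    """Collect global settings that could affect compilation output."""
    return {
        "torch_version": torch.__version__,
        "cuda_available": torch.cuda.is_available(),
        # Flags like these change the generated code, so they must
        # participate in the key.
        "allow_tf32": torch.backends.cuda.matmul.allow_tf32,
        "cudnn_benchmark": torch.backends.cudnn.benchmark,
    }


def fx_graph_cache_key(graph_repr: str) -> str:
    """Hash the graph together with the environment fingerprint."""
    payload = pickle.dumps(
        (graph_repr, sorted(environment_fingerprint().items()))
    )
    return hashlib.sha256(payload).hexdigest()
```

Any setting left out of the fingerprint is a potential cache-correctness bug, which is why the commit also has to special-case config settings that break key calculation.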
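The clear() fix is the standard pattern for idempotent cache cleanup: removing the directory must be a no-op when it is already absent. A sketch under that assumption, with an illustrative path rather than the actual cache location:

```python
import shutil


def clear(cache_dir: str = "/tmp/torchinductor_cache") -> None:
    """Remove the on-disk cache without raising if it doesn't exist."""
    # ignore_errors=True makes this a no-op when cache_dir is missing,
    # instead of raising FileNotFoundError.
    shutil.rmtree(cache_dir, ignore_errors=True)
```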