17d0b7f5 - [pt2][inductor]global autotuning cache (#94922)

[pt2][inductor] global autotuning cache (#94922)

Summary: This diff adds logic to handle a global autotuning cache, stored in JSON format at `config.global_cache_path`. What is changing from `DiskCache`:

* `DiskCache` is renamed to `PersistentCache`.
* The local cache is now stored as a single JSON file, located at `/tmp/torchinductor_{$USER}/local_cache`. The file contains a dictionary structured as `local_cache[name][inputs][choice]`, where `name` is the type of operation (e.g. `addmm`), `inputs` is the repr of the inputs, and `choice` is the hash of a `ChoiceCaller`. The stored value is the benchmark time for that `ChoiceCaller`.
* A global cache is added, initially stored at `fbcode/caffe2/torch/_inductor/global_cache`, with an almost identical format to the local cache. Since the global cache spans different machines, there is an additional `dinfo` field, such that `global_cache[dinfo] = local_cache` (at least structure-wise; there is no guarantee that the global cache and local cache share the same values). `dinfo` is just a repr of the CUDA device properties.
* The autotuner prioritizes the global cache, returning values from there first, before looking in the local cache.
* The autotuner looks in both the global cache and the local cache even when `max_autotune=False`, but will still only generate values if `max_autotune=True`.
* The autotuner logs global cache hits and misses to a Scuba table (inductor_autotuning_cache), which will be used to update the global cache at regular intervals.

Test Plan: D43285472

Differential Revision: D42785435

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94922
Approved by: https://github.com/jansel
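The lookup order described above (global cache first, then local cache, benchmarking only when `max_autotune=True`) can be sketched roughly as follows. This is a minimal illustration, not the actual `torch._inductor` implementation: the function name `lookup`, the `device_info()` stand-in for `dinfo`, and the plain-dict caches are all hypothetical.

```python
def device_info():
    # Stand-in for a repr of the CUDA device properties ("dinfo");
    # the real value would come from torch.cuda device queries.
    return "NVIDIA A100, 108 SMs, cc 8.0"

def lookup(global_cache, local_cache, name, inputs, choices,
           max_autotune, benchmark):
    """Return {choice_hash: benchmark_time} for (name, inputs).

    Checks the global cache first, then the local cache; only
    benchmarks new choices when max_autotune is True.
    """
    dinfo = device_info()
    # 1. Global cache is consulted first (even when max_autotune=False).
    timings = global_cache.get(dinfo, {}).get(name, {}).get(inputs)
    if timings is not None:
        return timings
    # 2. Fall back to the local cache.
    timings = local_cache.get(name, {}).get(inputs)
    if timings is not None:
        return timings
    # 3. Only generate (benchmark) new values when autotuning is enabled.
    if not max_autotune:
        return None
    timings = {choice: benchmark(choice) for choice in choices}
    local_cache.setdefault(name, {})[inputs] = timings
    return timings
```

For example, a global-cache hit returns the cached timings without touching the local cache, while a full miss with `max_autotune=False` returns nothing rather than benchmarking.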