pytorch commit e7ed0172: [Static Runtime] Fix MemoryPlanner dtor crash in debug mode (#79942)

Summary: Memory planner destruction was hitting [this assertion](https://www.internalfb.com/code/fbsource/[f8baf8a0bab462c860d2eb7491a4e3fb40d2907a]/fbcode/caffe2/c10/util/intrusive_ptr.h?lines=117) in debug mode for a few models. Here's what was going on:

1) The set of unmanaged `IValue`s acquires one or more owning refs to a managed `StorageImpl`.
2) Then, one or more tensors in that storage group have their `StorageImpl` swapped out during execution.
3) During `deallocateManagedTensors`, we swap the correct `StorageImpl` back in, [calling `unsafe_adapt_non_heap_allocated` again and resetting the refcount](https://www.internalfb.com/code/fbsource/[f8baf8a0bab462c860d2eb7491a4e3fb40d2907a]/fbcode/caffe2/torch/csrc/jit/runtime/static/memory_planner.cpp?lines=446-452).
4) The unmanaged `IValue`s are deallocated, decrementing the refcount into the danger zone.

So, we just have to make sure that unmanaged `IValue`s are destructed before we deallocate the managed tensors.

Test Plan: CI

Differential Revision: D37303728

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79942
Approved by: https://github.com/tenpercent
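To make the ordering concrete, here is a minimal, self-contained C++ sketch of the fix. All names here (`ArenaStorage`, `PlannerSketch`, `kArenaRefcountFloor`) are invented for illustration and are not the real c10 / Static Runtime types; the sentinel-refcount scheme only loosely stands in for `unsafe_adapt_non_heap_allocated`.

```cpp
// Sketch of the destruction-ordering bug and fix. All types are toys,
// NOT the real c10 / Static Runtime classes.
#include <cassert>
#include <vector>

// The arena pins managed storage at a sentinel refcount so it is never
// heap-freed (loosely analogous to unsafe_adapt_non_heap_allocated).
constexpr int kArenaRefcountFloor = 1000;

struct ArenaStorage {
  int refcount = kArenaRefcountFloor;

  void incref() { ++refcount; }
  void decref() {
    --refcount;
    // Debug-mode check, playing the role of the intrusive_ptr assert
    // that the original crash was hitting.
    assert(refcount >= kArenaRefcountFloor && "refcount underflow");
  }
};

struct PlannerSketch {
  ArenaStorage managed;
  // Unmanaged IValues that acquired owning refs to the managed storage.
  std::vector<ArenaStorage*> unmanaged_owners;

  void add_unmanaged_ref() {
    managed.incref();
    unmanaged_owners.push_back(&managed);
  }

  void deallocate() {
    // FIX: release the unmanaged owning refs FIRST...
    for (ArenaStorage* s : unmanaged_owners) {
      s->decref();
    }
    unmanaged_owners.clear();

    // ...and only then reset the managed storage's refcount back to the
    // sentinel for the next iteration.
    managed.refcount = kArenaRefcountFloor;
  }
};

int main() {
  PlannerSketch p;
  p.add_unmanaged_ref();
  p.deallocate(); // fine with the fixed ordering
}
```

With the old ordering (reset the refcount first, then release the unmanaged refs), each later decrement dips below the floor and trips the debug assert, mirroring the crash described above.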
Author: Mike Iovine