pytorch
04043d68 - [package] fix storage serialization collision (#61806)

[package] fix storage serialization collision (#61806)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/61806

Currently, if you do `save_pickle` on a ScriptModule, then `save_pickle` on a tensor, a `0.storage` entry would be written *twice* to the zip archive. This caused weird bugs on the serializing side (it presented as an ASAN-detected heap buffer overflow, because we tried to read more memory from a tensor than we actually had).

It turns out this was because when we did:

```
self.storage_context = self.script_module_serializer.storage_context()
```

it returned a new copy of the storage context, so we weren't actually assigning unique names to tensors. This PR fixes the issue by making `(De)SerializationStorageContext` non-copyable and fixing up the parts of the bindings that returned by copy.

Differential Revision: D29748969

Test Plan: Imported from OSS

Reviewed By: Lilyjjo

Pulled By: suo

fbshipit-source-id: c2f89ab270e07e7a111fb35c545b5e07b804dc3c
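To illustrate the failure mode, here is a minimal, self-contained sketch (not the actual `torch.package` code; the `StorageContext` class and its method names below are hypothetical). A storage context deduplicates storages by handing each distinct one a unique name. If two serializers share one context, names stay unique; if a serializer instead receives a *copy* of the context, as the commit describes, the copy starts from empty state and re-issues `0.storage` for a different storage, producing the collision:

```python
import copy

class StorageContext:
    """Hypothetical stand-in for the (De)SerializationStorageContext:
    assigns each distinct storage a unique archive name."""

    def __init__(self):
        self._names = {}  # storage id -> assigned archive name

    def get_or_assign_name(self, storage_id):
        if storage_id not in self._names:
            # Names are issued sequentially: 0.storage, 1.storage, ...
            self._names[storage_id] = f"{len(self._names)}.storage"
        return self._names[storage_id]

# Correct behavior: two serialization passes share ONE context, so a
# second, distinct storage gets a fresh name.
shared = StorageContext()
name_a = shared.get_or_assign_name("script_module_storage")
name_b = shared.get_or_assign_name("plain_tensor_storage")
print(name_a, name_b)  # distinct names, no collision

# Buggy pattern described in the commit: the binding returned a COPY of
# the context, so the second pass loses the recorded names and re-issues
# "0.storage" for a different storage -> two archive entries collide.
leaked_copy = copy.deepcopy(shared)
broken = StorageContext()  # what a fresh copy effectively behaves like
name_c = broken.get_or_assign_name("plain_tensor_storage")
print(name_a == name_c)  # True: same name for two different storages
```

Making the context non-copyable (as the fix does on the C++ side) forces every serializer to hold a reference to the same naming state, so the sequential counter can never be reset mid-package.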
Author: suo