pytorch
b65ddef0 - for shared-memory handles, use an atomic counter, instead of potentially colliding random numbers (#60978)

Committed 3 years ago
for shared-memory handles, use an atomic counter, instead of potentially colliding random numbers (#60978)

Summary: These handles, used for shared-memory tensors, can collide. E.g. see https://github.com/pytorch/pytorch/issues/60626#issuecomment-869919018

Pull Request resolved: https://github.com/pytorch/pytorch/pull/60978
Reviewed By: mruberry
Differential Revision: D29479291
Pulled By: ezyang
fbshipit-source-id: 408ef1817768f007ad4795b286482809ea43467c
Author: michael