Fix race in CUDA initialization (#25788)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25788
Previously, I thought that _lazy_init held the GIL throughout initialization, so
I could write the code in a single-threaded manner. This is not true; it
releases the GIL at various points, which makes it possible for another thread
to race with initialization.
The correct fix is to add locking around the initialization section, so other
threads wait until the first thread finishes initializing before being let
in. There is some subtlety in how to handle lazy calls, which will call
_lazy_init reentrantly; this is handled using TLS that records whether you
are the initializing thread (in which case reentrant calls are OK).
Fixes #16559
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Differential Revision: D17366348
Pulled By: ezyang
fbshipit-source-id: 99b982709323e2370d03c127c46d87be97495916