Allow GPU skip decorators to report the right number of GPUs required (#43468)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43468
Closes https://github.com/pytorch/pytorch/issues/41378.
https://github.com/pytorch/pytorch/pull/41973 enhanced the skip decorators to
report the right number of GPUs required, but this information was not passed
back to the main process, where the message is actually displayed. This PR uses
a `multiprocessing.Manager()` so that the dictionary modification made in the
child process is reflected correctly in the main process.
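A minimal sketch of why a `multiprocessing.Manager()` dict is needed here: writes a child process makes to a plain dict are lost (the child works on a copy), while a Manager-backed dict proxies them back to the parent. The names below (`worker`, `shared`) are illustrative and not from the PR.

```python
import multiprocessing as mp


def worker(shared):
    # With a Manager dict, this write is proxied back to the parent
    # process; with a plain dict, the child would only mutate its copy.
    shared["message"] = "Need at least 4 CUDA devices"


def main():
    with mp.Manager() as manager:
        shared = manager.dict()
        p = mp.Process(target=worker, args=(shared,))
        p.start()
        p.join()
        # The parent sees the child's modification.
        return dict(shared)


if __name__ == "__main__":
    print(main())
```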
ghstack-source-id: 110684228
Test Plan:
With this diff, we can run a test such as the one in https://github.com/pytorch/pytorch/pull/42577, which requires 4 GPUs, on a 2-GPU machine, and we get the expected message:
```
test_ddp_uneven_inputs_replicated_error (test_distributed.TestDistBackend) ... skipped 'Need at least 4 CUDA devices'
```
Reviewed By: mrshenli
Differential Revision: D23285790
fbshipit-source-id: ac32456ef3d0b1d8f1337a24dba9f342c736ca18