Make xnnpack based convs thread safe (#84602)
Summary:
For convolutions, XNNPACK uses an indirection buffer, which must be set up
again whenever the input dimensions change. If we run the same model from
multiple threads, each supplying a tensor with different input dimensions,
there is a race condition: the indirection buffer may be in use by one thread
while being reset by another. This diff adds a lock to each conv object to
serialize execution and prevent such races. When uncontended, the lock should
have no measurable perf impact.
Test Plan: TestConvolution2dMultiThreaded
Differential Revision: D39288298
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84602
Approved by: https://github.com/digantdesai