pytorch
f2689b1e - Make ideep honor `torch.set_num_thread` changes (#53871)

Make ideep honor `torch.set_num_thread` changes (#53871)

Summary: When compiled with OpenMP support, `ideep`'s computational cache would store the maximum number of OpenMP workers. That cached value could become stale after a `torch.set_num_threads` call, so clear the cache after the call.

Fixes https://github.com/pytorch/pytorch/issues/53565

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53871
Reviewed By: albanD
Differential Revision: D27003265
Pulled By: malfet
fbshipit-source-id: 1d84c23070eafb3d444e09590d64f97f99ae9d36
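
For context, a minimal sketch of the user-facing behavior this commit addresses, assuming a CPU build of PyTorch with MKL-DNN/oneDNN support: after calling `torch.set_num_threads`, oneDNN-backed (ideep) ops should run with the newly requested thread count rather than the previously cached one. The printed values are machine- and build-dependent.

```python
import torch

# Inspect the current intra-op thread count (value depends on the machine/build).
print("default threads:", torch.get_num_threads())

# Request a smaller thread pool; after this fix, ideep's cached OpenMP
# worker count is cleared so subsequent oneDNN-backed ops pick up the change.
torch.set_num_threads(2)
print("after set_num_threads:", torch.get_num_threads())

# A CPU convolution typically dispatches to oneDNN (ideep) when PyTorch is
# built with MKL-DNN support; it should now observe the updated thread count.
x = torch.randn(1, 3, 32, 32)
w = torch.randn(8, 3, 3, 3)
y = torch.nn.functional.conv2d(x, w)
print(y.shape)
```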