[PyTorch] [Model Tracer] Use c10::Synchronized<T> for custom classes tracer (#74106)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74106
Currently, the custom classes tracer relies on the user for advisory locking of the mutex when accessing shared data structures. This change makes the locking mandatory by using the `c10::Synchronized<T>` abstraction.
ghstack-source-id: 151471743
Test Plan:
Built model tracer successfully using:
```
buck build -c pt.disable_per_op_profiling=0 -c pt.enable_record_kernel_dtype=1 --show-output xplat/caffe2/fb/model_tracer:model_tracer
```
Reviewed By: malfet
Differential Revision: D34822917
fbshipit-source-id: 1203cbef342d341d11c1f8d71b0cf9d93d805d0d
(cherry picked from commit bd1df1cfd5069473daa10aa33484484d79d9ed0a)