[iOS][coreml] Add CoreML memory observer (#76251)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76251
Add an observer to `PTMCoreMLExecutor` so we can inspect OOMs in production to help with T115554493.
The logger behaves as follows:
1. Each time a model is compiled, there is a chance that we publish all logs for that model load to QPL. This is determined by the randomly generated `_model_load_id` and `_sample_thresh`.
2. If we are publishing all logs, then every `_sample_every`-th inference is logged via QPL.
3. Every QPL log collects memory metrics before and after model compilation/inference.
4. If memory pressure is not normal (remaining memory < 400 MB) before or after a compilation/inference, then that compilation/inference is logged to QPL regardless of sampling (see the sketch below).
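To make the sampling flow concrete, here is a minimal C++ sketch of the decision logic described above. The exact comparison between `_model_load_id` and `_sample_thresh`, the use of `os_proc_available_memory()` to measure remaining memory, and helper names such as `shouldLog` are illustrative assumptions; the actual `PTMCoreMLExecutor` observer may differ.
```
// Minimal sketch of the sampling + memory-pressure checks; values and
// comparisons are illustrative, not the actual PTMCoreMLExecutor code.
#include <os/proc.h>  // os_proc_available_memory(), iOS 13+
#include <cstdint>
#include <cstdlib>

namespace {

constexpr uint32_t _sample_thresh = 10;           // assumed: ~1 in 10 model loads publish logs
constexpr uint32_t _sample_every  = 50;           // assumed: log every 50th inference when sampled
constexpr size_t   kLowMemBytes   = 400ULL << 20; // "not normal" pressure: < 400 MB remaining

// Decided once per model compilation: a random load id selects whether this
// load publishes its logs at all (assumed comparison).
const uint32_t _model_load_id = arc4random();
const bool _publish_logs = (_model_load_id % _sample_thresh) == 0;

bool memoryPressureIsAbnormal() {
  // os_proc_available_memory() reports how much memory the process may still use.
  return os_proc_available_memory() < kLowMemBytes;
}

bool shouldLog(uint32_t inference_count) {
  // Abnormal memory pressure is always logged, regardless of sampling.
  if (memoryPressureIsAbnormal()) {
    return true;
  }
  // Otherwise, log only for sampled model loads, and only every Nth inference.
  return _publish_logs && (inference_count % _sample_every) == 0;
}

}  // namespace
```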
Test Plan:
We can test in the PyTorch playground app and inspect the QPL logs through Flipper:
```
arc focus2 -b pp-ios -a ModelRunner -a //xplat/caffe2/c10:c10Apple -a //xplat/caffe2:torch_mobile_coreApple -a //xplat/caffe2/fb/dynamic_pytorch:dynamic_pytorch_implApple -a //xplat/caffe2:coreml_delegateApple -a ModelRunnerDevOps -a //xplat/caffe2:torch_mobile_all_opsApple -a coreml_memory_observer -a //xplat/perflogger:perfloggerApple -fd --force-with-wrong-xcode
```
To check results in Hive/Scuba, test in Instagram:
```
arc focus2 -b igios-no-extensions -a //fbobjc/Apps/Instagram/AppLibraries/Core/QPL/IGPerformanceLogging:IGPerformanceLogging -a //xplat/caffe2/c10:c10Apple -a //xplat/caffe2:torch_mobile_coreApple -a //xplat/caffe2/fb/dynamic_pytorch:dynamic_pytorch_implApple -a //xplat/caffe2:coreml_delegateApple -a //xplat/caffe2:torch_mobile_all_opsApple -a //xplat/perflogger:perfloggerApple -a coreml_memory_observerApple -c pt.enable_qpl=1 --force-with-wrong-xcode
```
Note that `_sample_thresh` needs to be changed locally to ensure logs show up.
Reviewed By: kimishpatel
Differential Revision: D35511873
fbshipit-source-id: 59f2fa2d021178ceab1fcf5ee94b2f15ceca32ee
(cherry picked from commit 8b8af55410ea1231693ee980c80d8a749f5ad870)