pytorch commit 652707ab: Don't cache model specs within PTMCoreMLCompiler (#85136)

Summary: It turns out disk cache space is more limited than I realized: Instagram starts evicting cached items at 10 MB. We don't actually need to cache the model specs; once the model is compiled, all we need is the compiled model. With this diff, after model compilation succeeds we clean up the model specs from disk.

Test Plan: Delete Instagram from the device to ensure an empty cache, build, launch the camera, open an MCS or Segmentation effect, and confirm it loads and works correctly. Restart the app and launch again to confirm it can also load the compiled model from the cache.

Differential Revision: D39562009

Pull Request resolved: https://github.com/pytorch/pytorch/pull/85136
Approved by: https://github.com/kimishpatel
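For illustration only: the sketch below shows the general "compile, then evict the spec" pattern the commit describes, written in Swift against the public CoreML API. It is not the actual PTMCoreMLCompiler implementation (which lives in pytorch's Objective-C++ sources), and the function and path names here are hypothetical.

```swift
import CoreML
import Foundation

// Hypothetical sketch: compile a serialized CoreML model spec, keep only the
// compiled .mlmodelc bundle in the app's cache, and delete the on-disk spec
// so it no longer counts against the limited disk-cache budget.
func compileModelAndEvictSpec(specURL: URL, cacheDirectory: URL) throws -> URL {
    let fileManager = FileManager.default

    // Compile the .mlmodel spec into an executable .mlmodelc bundle
    // (written to a temporary location chosen by CoreML).
    let tempCompiledURL = try MLModel.compileModel(at: specURL)

    // Move the compiled model into the persistent cache directory so it can
    // be reloaded on the next app launch without recompiling.
    let cachedModelURL = cacheDirectory.appendingPathComponent(tempCompiledURL.lastPathComponent)
    if fileManager.fileExists(atPath: cachedModelURL.path) {
        try fileManager.removeItem(at: cachedModelURL)
    }
    try fileManager.moveItem(at: tempCompiledURL, to: cachedModelURL)

    // Compilation succeeded, so the spec is no longer needed: remove it so
    // only the compiled model occupies cache space.
    try fileManager.removeItem(at: specURL)

    return cachedModelURL
}
```

On a subsequent launch, a caller would check for the cached .mlmodelc first and only fall back to recompiling (and re-evicting the spec) if it is missing, which matches the restart step in the test plan above.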