98077fb0 - Enable TIMM pretrained model caching on shared HF cache (#174596)

Summary: TIMM benchmarks fail in CI because TRANSFORMERS_OFFLINE=1 blocks HF Hub downloads, so the benchmarks fail whenever the TIMM cache is missing.

- Add a --download-only flag to common.py to pre-download all models
- Use a pin-specific cache directory and stamp file to ensure the TIMM cache is ready

Authored with Claude.

X-link: https://github.com/pytorch/pytorch/pull/174596
Approved by: https://github.com/huydhn
Reviewed By: seemethere
Differential Revision: D93316487
fbshipit-source-id: cc4661e22a0a5ef9f9f31f133c7975c4f39868ae
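The pin-specific cache directory plus stamp file pattern mentioned above can be sketched roughly as follows. This is a minimal illustration, not the actual code from common.py; the `ensure_timm_cache` helper, the stamp filename, and the path layout are all hypothetical assumptions:

```python
import subprocess
from pathlib import Path

def ensure_timm_cache(cache_root: str, pin: str) -> Path:
    """Return a cache directory keyed to a specific TIMM pin.

    Models are pre-downloaded only when the stamp file for this pin is
    missing, so repeated CI runs with the same pin skip the download.
    (Hypothetical helper; names and layout are illustrative.)
    """
    cache_dir = Path(cache_root) / f"timm-{pin}"
    stamp = cache_dir / ".download_complete"
    if not stamp.exists():
        cache_dir.mkdir(parents=True, exist_ok=True)
        # Placeholder for the actual pre-download step, e.g. invoking the
        # benchmark harness with --download-only while the network is up:
        # subprocess.run(["python", "common.py", "--download-only"], check=True)
        stamp.touch()  # mark the cache as fully populated for this pin
    return cache_dir
```

The stamp file makes cache readiness explicit: a partially populated directory (from an interrupted download) has no stamp, so the next run re-downloads rather than failing later under TRANSFORMERS_OFFLINE=1.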