ca731c26 - First PR to add the correctness checking code to eval tests (#762)

Summary:
This PR prepares for adding the correctness checking code to eval tests:
1. Each `eval()` function now returns `Tuple[torch.Tensor]`, i.e., the inference result.
2. Add a test that verifies point 1 holds for every model (a sketch of such a test appears below).
3. Change `run_sweep.py` to prepare for the correctness checking.

A follow-up PR, https://github.com/pytorch/benchmark/pull/763, adds the actual correctness calculation code.

Pull Request resolved: https://github.com/pytorch/benchmark/pull/762

Reviewed By: wushirong

Differential Revision: D34438166

Pulled By: xuzhao9

fbshipit-source-id: b876795485d5942727c3f3dad6ec44eef3250678
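A minimal sketch of what the per-model check described in point 2 could look like. `_DummyModel` and `_list_models` are hypothetical stand-ins for illustration only, not the actual torchbenchmark model discovery API; the real test would iterate over the suite's discovered models.

```python
import unittest

import torch


class _DummyModel:
    """Hypothetical stand-in for a torchbenchmark model wrapper."""

    def eval(self):
        # Per this PR's convention, eval() returns the inference
        # result as a tuple of tensors.
        return (torch.randn(1, 3, 224, 224),)


def _list_models():
    # Hypothetical loader; the real suite discovers models dynamically.
    return [_DummyModel]


class TestEvalOutput(unittest.TestCase):
    def test_eval_returns_tuple_of_tensors(self):
        for model_cls in _list_models():
            model = model_cls()
            result = model.eval()
            # Point 1 of the commit message: the return value must be
            # a tuple whose elements are all torch.Tensor, so a later
            # PR can compare these outputs for correctness.
            self.assertIsInstance(result, tuple)
            for out in result:
                self.assertIsInstance(out, torch.Tensor)


if __name__ == "__main__":
    unittest.main()
```

Returning the raw inference outputs, rather than discarding them, is what makes the follow-up PR (#763) possible: a downstream checker can compare the returned tensors against reference outputs.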