Run only one pytest parametrization when generating optest (#108936)
Richard, I'm curious to see what you think of this. I'm trying to use optest on the torchvision test suite, and after hacking up pytest support in https://github.com/pytorch/pytorch/pull/108929 I noticed that this was 5x'ing the test time... for no good reason.
* torchvision nms tests before optests: 60 passed, 4 skipped, 1206 deselected in 11.47s
* after optests: 300 passed, 20 skipped, 1206 deselected in 49.85s
There's no good reason for this: torchvision parametrizes the tests to cover a spread of random inputs, but for checking schemas or fake tensors we don't actually need to test different values.
This PR hacks up the codegen to rewrite pytest parametrize markers so that, instead of sampling many values, we sample only one value for tests marked with `opcheck_only_one`. There's a carve-out for device parametrization, where we always run all the variants.
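The idea can be sketched in a few lines of plain Python. This is a hypothetical illustration, not the actual codegen from the PR: it models parametrize markers as `(argname, values)` pairs, keeps every device variant, and collapses each other parametrization down to a single sample. The function name `trim_parametrizations` is invented for this sketch.

```python
def trim_parametrizations(markers, opcheck_only_one=True):
    """Collapse parametrizations to one sample each, except for devices.

    markers: list of (argname, list_of_values) tuples, standing in for
    pytest.mark.parametrize markers in this sketch.
    """
    if not opcheck_only_one:
        return markers
    trimmed = []
    for argname, values in markers:
        if argname == "device":
            # Carve-out: always run every device variant.
            trimmed.append((argname, values))
        else:
            # One sample is enough for schema / fake tensor checks.
            trimmed.append((argname, values[:1]))
    return trimmed
```

For example, a test parametrized over two devices and three seeds (six variants) would be trimmed to two variants, one per device.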
With this PR:
* reduced optests: 88 passed, 4 skipped, 1206 deselected in 13.89s
Companion torchvision PR which uses this at https://github.com/pytorch/vision/pull/7961
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108936
Approved by: https://github.com/zou3519