Allow marking multiple unstable configs of the same job name (#109185)
This fixes a bug that has stayed around for a surprisingly long time (my fault). When there were multiple unstable configurations (`inductor`, `inductor_huggingface`, `inductor_huggingface_dynamic`) of the same job (`inductor / cuda12.1-py3.10-gcc9-sm86`), only the first one was marked as unstable. The for loop returned too early and missed the other two, even though they were also listed as unstable in https://ossci-metrics.s3.amazonaws.com/unstable-jobs.json.
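For context, here is a minimal sketch of the bug pattern in Python. The names (`mark_unstable_configs`, `unstable_entries`) are hypothetical stand-ins, not the actual `filter_test_configs.py` code: returning from inside the loop over unstable entries applies only the first matching one, while the fix processes every entry before returning.

```
from typing import Any, Dict, List


def mark_unstable_configs(
    test_matrix: Dict[str, List[Dict[str, Any]]],
    unstable_entries: List[Dict[str, str]],
    job_name: str,
) -> Dict[str, List[Dict[str, Any]]]:
    # Hypothetical model of unstable-jobs.json: each entry names a job and,
    # optionally, a single config of that job.
    for entry in unstable_entries:
        if entry["job"] != job_name:
            continue
        for config in test_matrix["include"]:
            if config["config"] == entry.get("config", config["config"]):
                config["unstable"] = "unstable"
        # BUG (before this PR): a `return test_matrix` here stopped after the
        # first matching entry, so later entries for the same job name (e.g.
        # inductor_huggingface, inductor_huggingface_dynamic) never took effect.
    # Fix: return only after every matching entry has been applied.
    return test_matrix
```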
### Testing
* Add a unit test (see the illustrative sketch after the command output below)
* CI run https://github.com/pytorch/pytorch/actions/runs/6169798353 shows that all the configs tracked by the issues below are marked as unstable:
* https://github.com/pytorch/pytorch/issues/107079
* https://github.com/pytorch/pytorch/issues/109153
* https://github.com/pytorch/pytorch/issues/109154
* Manually run the script to verify the test matrix output:
```
python .github/scripts/filter_test_configs.py \
--workflow "inductor" \
--job-name "cuda12.1-py3.10-gcc9-sm86 / build," \
--test-matrix "{ include: [
{ config: "inductor", shard: 1, num_shards: 1, runner: "linux.g5.4xlarge.nvidia.gpu" },
{ config: "inductor_huggingface", shard: 1, num_shards: 1, runner: "linux.g5.4xlarge.nvidia.gpu" },
{ config: "inductor_timm", shard: 1, num_shards: 2, runner: "linux.g5.4xlarge.nvidia.gpu" },
{ config: "inductor_timm", shard: 2, num_shards: 2, runner: "linux.g5.4xlarge.nvidia.gpu" },
{ config: "inductor_torchbench", shard: 1, num_shards: 1, runner: "linux.g5.4xlarge.nvidia.gpu" },
{ config: "inductor_huggingface_dynamic", shard: 1, num_shards: 1, runner: "linux.g5.4xlarge.nvidia.gpu" },
{ config: "inductor_timm_dynamic", shard: 1, num_shards: 2, runner: "linux.g5.4xlarge.nvidia.gpu" },
{ config: "inductor_timm_dynamic", shard: 2, num_shards: 2, runner: "linux.g5.4xlarge.nvidia.gpu" },
{ config: "inductor_torchbench_dynamic", shard: 1, num_shards: 1, runner: "linux.g5.4xlarge.nvidia.gpu" },
{ config: "inductor_distributed", shard: 1, num_shards: 1, runner: "linux.g5.12xlarge.nvidia.gpu" },
]}
" \
--pr-number "" \
--tag "" \
--event-name "push" \
--schedule "" \
--branch ""
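
# Output printed by the script: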
::set-output name=keep-going::False
::set-output name=is-unstable::False
::set-output name=reenabled-issues::
::set-output name=test-matrix::{"include": [{"config": "inductor", "shard": 1, "num_shards": 1, "runner": "linux.g5.4xlarge.nvidia.gpu", "unstable": "unstable"}, {"config": "inductor_huggingface", "shard": 1, "num_shards": 1, "runner": "linux.g5.4xlarge.nvidia.gpu", "unstable": "unstable"}, {"config": "inductor_timm", "shard": 1, "num_shards": 2, "runner": "linux.g5.4xlarge.nvidia.gpu"}, {"config": "inductor_timm", "shard": 2, "num_shards": 2, "runner": "linux.g5.4xlarge.nvidia.gpu"}, {"config": "inductor_torchbench", "shard": 1, "num_shards": 1, "runner": "linux.g5.4xlarge.nvidia.gpu"}, {"config": "inductor_huggingface_dynamic", "shard": 1, "num_shards": 1, "runner": "linux.g5.4xlarge.nvidia.gpu", "unstable": "unstable"}, {"config": "inductor_timm_dynamic", "shard": 1, "num_shards": 2, "runner": "linux.g5.4xlarge.nvidia.gpu"}, {"config": "inductor_timm_dynamic", "shard": 2, "num_shards": 2, "runner": "linux.g5.4xlarge.nvidia.gpu"}, {"config": "inductor_torchbench_dynamic", "shard": 1, "num_shards": 1, "runner": "linux.g5.4xlarge.nvidia.gpu"}, {"config": "inductor_distributed", "shard": 1, "num_shards": 1, "runner": "linux.g5.12xlarge.nvidia.gpu"}]}
::set-output name=is-test-matrix-empty::False
```
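As an illustration, here is a hedged sketch of what the new unit test might assert, reusing the hypothetical `mark_unstable_configs` helper from the sketch above (the actual test exercises the real `filter_test_configs.py` logic). The key assertion is that every listed config of the job is marked, not just the first:

```
import unittest


class TestMarkUnstableConfigs(unittest.TestCase):
    def test_marks_all_unstable_configs_of_same_job(self) -> None:
        matrix = {
            "include": [
                {"config": "inductor", "shard": 1, "num_shards": 1},
                {"config": "inductor_huggingface", "shard": 1, "num_shards": 1},
                {"config": "inductor_timm", "shard": 1, "num_shards": 2},
            ]
        }
        # Two unstable entries for the same job name, each naming one config.
        entries = [
            {"job": "cuda12.1-py3.10-gcc9-sm86", "config": "inductor"},
            {"job": "cuda12.1-py3.10-gcc9-sm86", "config": "inductor_huggingface"},
        ]
        marked = mark_unstable_configs(matrix, entries, "cuda12.1-py3.10-gcc9-sm86")
        # All entries for the job must take effect, not just the first one.
        self.assertEqual(marked["include"][0].get("unstable"), "unstable")
        self.assertEqual(marked["include"][1].get("unstable"), "unstable")
        self.assertNotIn("unstable", marked["include"][2])


if __name__ == "__main__":
    unittest.main()
```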
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109185
Approved by: https://github.com/clee2000