Use the newer g5.12xlarge instead of g3.16xlarge for multigpu tests (#105759)
Both runner types have 4 GPUs. This is an attempt to mitigate the issue with `g3.16xlarge` runners, which have recently started crashing frequently (https://github.com/pytorch/pytorch/issues/105721). Let's see if switching to a newer runner type helps.
The job also finishes slightly faster: ~120m (https://github.com/pytorch/pytorch/actions/runs/5625775414/job/15246453229) vs. ~140m before (https://github.com/pytorch/pytorch/actions/runs/5625238650/job/15244823174).
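For illustration, a change like this typically amounts to swapping the runner label in the workflow's test matrix. The label names below are assumptions for sketch purposes, not the exact identifiers used in the PyTorch workflows:

```yaml
# Hypothetical sketch of the runner swap in a GitHub Actions test matrix.
# The exact label strings and job names are assumptions, not taken from the PR.
test-matrix: |
  { include: [
    # before: 4-GPU g3 runner that has been crashing frequently
    # { config: "multigpu", shard: 1, num_shards: 1, runner: "linux.g3.16xlarge.nvidia.gpu" },
    # after: newer 4-GPU g5 runner
    { config: "multigpu", shard: 1, num_shards: 1, runner: "linux.g5.12xlarge.nvidia.gpu" },
  ]}
```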
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105759
Approved by: https://github.com/atalman