Move XLA test job to 4xlarge (#92269)
Per the discussion with @clee2000, I'm looking into the XLA flaky failures. It's tricky because the runner crashes, losing all the logs. The only clue I have comes from the test insights information for the XLA test job, i.e. https://hud.pytorch.org/test/insights?jobName=linux-bionic-py3_7-clang8-xla%20%2F%20test%20(xla%2C%201%2C%201%2C%20linux.2xlarge)&workflowId=3919472559&jobId=10650151864
* Memory looks fine. It peaks at ~14GB while building, then drops during testing
* CPU spikes at 100% at the end, which I suspect is what causes the runner to crash
So the fix is to limit the tests to nCPU - 1 processes, leaving one core free for the runner.
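The idea can be sketched as follows. This is a minimal illustration, not the actual change in this PR; the variable name `num_test_procs` and the fallback value are assumptions for the example:

```python
import os

# Leave one core free for the CI runner process so it stays responsive.
# os.cpu_count() can return None on some platforms, so fall back to 2,
# and never go below 1 worker. (Illustrative sketch only.)
num_test_procs = max(1, (os.cpu_count() or 2) - 1)
```

The resulting value would then be passed to whatever mechanism launches the test shards in parallel.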
Pull Request resolved: https://github.com/pytorch/pytorch/pull/92269
Approved by: https://github.com/malfet