Fix test failure in TestCudaMultiGPU.test_cuda_device_memory_allocated (#105501)
The test
https://github.com/pytorch/pytorch/blob/f508d3564c8a472ba2f74878dbdf67f09eaae2d3/test/test_cuda_multigpu.py#L1282-L1290
can fail because the PyTorch CUDA caching allocator may serve the allocation from its cache, leaving `new_alloc` equal to `old_alloc`. The strict assertion then fails:
```python
self.assertGreater(memory_allocated(0), current_alloc[0])
```
I suggest using `assertGreaterEqual` instead of `assertGreater` in the test.
Running this test alone does not reproduce the failure; it fails only when run together with other tests from the same test module.
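A minimal sketch of why the relaxed assertion is the right fix, using simulated allocation counters (the values here are hypothetical, standing in for `memory_allocated(0)` before and after a cached allocation):

```python
import unittest

tc = unittest.TestCase()

# Simulated measurements: when the caching allocator serves the request
# from its cache, the second reading can equal the first.
old_alloc, new_alloc = 1024, 1024

# The strict ">" check is flaky: it fails whenever the values are equal.
try:
    tc.assertGreater(new_alloc, old_alloc)
    strict_passed = True
except AssertionError:
    strict_passed = False

# The ">=" check tolerates the cached-allocation case.
tc.assertGreaterEqual(new_alloc, old_alloc)
print(strict_passed)
```

This prints `False`: the strict comparison rejects the legitimate cached case, while `assertGreaterEqual` still catches any genuine decrease in allocated memory.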
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105501
Approved by: https://github.com/zou3519