Fix Error with torch.flip() for cuda tensors when dims=() (#50325)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/49982
The method flip_check_errors, called from the CUDA file, had a condition that threw an exception when the dims size was <= 0. Changed that check to < 0 and added a separate condition to return early from the method when the size is exactly zero. The early return is needed because the rest of the method performs checks that expect a non-zero-size dims.
Also removed the comment/condition that was written to point to the issue.
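For context, here is a minimal sketch of the user-facing behavior this change targets, assuming the fix makes the CUDA path match the existing CPU behavior of treating an empty dims as a no-op (flip still returns a copy):

```python
# Sketch only: illustrates the expected behavior of torch.flip with dims=()
# after this fix; not code from the patch itself.
import torch

x = torch.arange(6).reshape(2, 3)
# CPU: flipping over no dimensions returns an unflipped copy.
assert torch.equal(torch.flip(x, dims=()), x)

if torch.cuda.is_available():
    x_cuda = x.cuda()
    # Previously this raised because the CUDA check required dims size > 0;
    # with the early return it should now match the CPU behavior.
    assert torch.equal(torch.flip(x_cuda, dims=()), x_cuda)
```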
mruberry, kshitij12345: please review this once.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50325
Reviewed By: zhangguanheng66
Differential Revision: D25869559
Pulled By: mruberry
fbshipit-source-id: a831df9f602c60cadcf9f886ae001ad08b137481