Make resize_as_ generic, so XLA works. (#26809)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26809
resize_as_ shouldn't do multiple dispatch on its second argument. However,
because it is currently registered with per-backend CPU/CUDA kernels, the
dispatcher considers all tensor arguments, including the second one. Bad!
Making resize_as_ a generic (backend-independent) function means dispatch
only happens through resize_ on the first argument, so backends like XLA
work without needing their own resize_as_ kernel.
The only downside of this patch is a very minor one: there is now one
extra dynamic dispatch.
Thank you Ailing for reporting this problem.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Differential Revision: D17581324
Pulled By: ezyang
fbshipit-source-id: e62cbb6cf497a7d6e53c4a24b905fef7a29b0826