Stop using c10::scalar_to_tensor in float_power. (#50105)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50105
There should be no functional change here.
A couple of reasons for this change:
1) c10::scalar_to_tensor is generally an anti-pattern (https://github.com/pytorch/pytorch/issues/49758), so it is good to minimize its usage in the code base.
2) pow itself already has a fair amount of smarts, such as not broadcasting scalar/tensor combinations, so we should defer to it (see the sketch after this list).
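To illustrate the kind of pattern being avoided, here is a minimal sketch, not the actual diff; the real kernel lives in aten/src/ATen/native/Pow.cpp, and the helper names below are hypothetical:

```cpp
#include <ATen/ATen.h>
#include <ATen/ScalarOps.h>

// Anti-pattern (hypothetical helper): materialize the Scalar as a 0-dim
// Tensor and take the tensor/tensor pow path, which discards pow's special
// handling of scalar exponents.
at::Tensor float_power_before(const at::Tensor& base, const at::Scalar& exp) {
  auto exp_t = c10::scalar_to_tensor(exp, base.device()).to(at::kDouble);
  return at::pow(base.to(at::kDouble), exp_t);
}

// Preferred (hypothetical helper): pass the Scalar straight through and let
// pow's Scalar overload handle it, with no temporary tensor to broadcast.
at::Tensor float_power_after(const at::Tensor& base, const at::Scalar& exp) {
  return at::pow(base.to(at::kDouble), exp);
}
```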
Test Plan: Imported from OSS
Reviewed By: mruberry
Differential Revision: D25786172
Pulled By: gchanan
fbshipit-source-id: 89de03aa0b900ce011a62911224a5441f15e331a