Stop moving scalars to GPU for one computation in leaky_rrelu_backward. (#50115)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50115
Moving a scalar to the GPU just to use it in a single computation cannot be performant, and we are trying to minimize usage of scalar_to_tensor(..., device) since it is an anti-pattern; see https://github.com/pytorch/pytorch/issues/49758.
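For illustration, a minimal sketch of the anti-pattern and the fix. The function and variable names below are hypothetical, not the exact code touched by this PR: the point is that Tensor-Scalar ops accept a Scalar directly, so there is no need to materialize it as a device tensor first.

```cpp
#include <ATen/ATen.h>
#include <ATen/ScalarOps.h>

using at::Scalar;
using at::Tensor;

// Anti-pattern (hypothetical sketch): materialize the scalar on self's
// device just to use it once, paying for a one-element device tensor.
Tensor leaky_backward_slow(
    const Tensor& grad_output, const Tensor& self, const Scalar& negative_slope) {
  Tensor slope = at::scalar_to_tensor(negative_slope, self.device());
  return at::where(self > 0, grad_output, grad_output * slope);
}

// Preferred (hypothetical sketch): keep the value a Scalar. Tensor-Scalar
// ops pass it into the kernel without allocating a device tensor.
Tensor leaky_backward_fast(
    const Tensor& grad_output, const Tensor& self, const Scalar& negative_slope) {
  return at::where(self > 0, grad_output, grad_output * negative_slope);
}
```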
Test Plan: Imported from OSS
Reviewed By: mruberry
Differential Revision: D25790331
Pulled By: gchanan
fbshipit-source-id: 89d6f016dfd76197541b0fd8da4a462876dbf844