Move exponential_ from TH to ATen (CPU) (#32501)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32501
This diff addresses https://github.com/pytorch/pytorch/issues/24699
We now require the input `lambda` to be >= 0, matching NumPy's `numpy.random.exponential` (https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.random.exponential.html#numpy-random-exponential). This check did not exist in the previous implementation.
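As a rough sketch (not the actual ATen kernel, which is written in C++ and uses PyTorch's RNG), the behavior being moved boils down to inverse-transform sampling of the exponential distribution plus the new non-negativity check on `lambda`; the function and parameter names below are illustrative only:

```python
import math
import random

def exponential_sample(lambd, n, rng=None):
    """Draw n samples from Exponential(lambd) via inverse-transform
    sampling. Rejects lambd < 0, mirroring the new >= 0 validation."""
    if lambd < 0:
        raise ValueError(f"exponential_ expects lambda >= 0, but got {lambd}")
    if rng is None:
        rng = random.Random(0)
    # Inverse CDF: x = -ln(1 - u) / lambda, with u ~ Uniform[0, 1).
    return [-math.log(1.0 - rng.random()) / lambd for _ in range(n)]
```

For `lambd > 0` the samples are non-negative with mean `1 / lambd`; a negative rate now raises instead of silently producing garbage.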
Benchmark: measured with the PyTorch operator microbenchmark suite.
```
================================================================================
Before the change, Program Output:
================================================================================
# ----------------------------------------
# PyTorch/Caffe2 Operator Micro-benchmarks
# ----------------------------------------
# Tag : short
# Benchmarking PyTorch: exponential_
# Mode: Eager
# Name: exponential__M512_N512_cpu
# Input: M: 512, N: 512, device: cpu
Forward Execution Time (us) : 21311.746
================================================================================
After the change, Program Output:
================================================================================
# ----------------------------------------
# PyTorch/Caffe2 Operator Micro-benchmarks
# ----------------------------------------
# Tag : short
# Benchmarking PyTorch: exponential_
# Mode: Eager
# Name: exponential__M512_N512_cpu
# Input: M: 512, N: 512, device: cpu
Forward Execution Time (us) : 20919.914
================================================================================
```
Test Plan: Sandcastle and GitHub tests
Reviewed By: BIT-silence
Differential Revision: D19518700
fbshipit-source-id: 0e79cb6a999c1278eb08b0d94cf61b119c85a36c