Use _sparse_coo_tensor_unsafe to shallow copy sparse tensors in accumulate_grad (#36292)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36292
As reported in https://github.com/pytorch/pytorch/issues/36120,
sparse_coo_tensor runs some expensive validation checks, and AccumulateGrad was
using it to shallow copy a sparse tensor. These checks can be skipped by using
_sparse_coo_tensor_unsafe instead, since we are only reusing the indices and
values of an already-valid sparse tensor to shallow copy it.
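To illustrate the idea (not the actual PyTorch internals), here is a minimal pure-Python sketch: a "checked" constructor validates indices against the shape the way sparse_coo_tensor does, while an "unsafe" constructor skips validation the way _sparse_coo_tensor_unsafe does, which is safe when the inputs already come from a valid sparse tensor. All names below (SparseCOO, sparse_coo_checked, sparse_coo_unsafe, shallow_copy) are hypothetical stand-ins.

```python
class SparseCOO:
    # Hypothetical stand-in for a sparse COO tensor: indices, values, shape.
    def __init__(self, indices, values, shape):
        self.indices = indices  # list of (d0, d1, ...) coordinate tuples
        self.values = values
        self.shape = shape


def sparse_coo_checked(indices, values, shape):
    # Mimics the validation sparse_coo_tensor performs: indices and values
    # must line up, and every coordinate must be in bounds. This work is
    # O(nnz * ndim) and is what the PR avoids on the hot path.
    if len(indices) != len(values):
        raise ValueError("indices/values length mismatch")
    for idx in indices:
        for dim, coord in enumerate(idx):
            if not 0 <= coord < shape[dim]:
                raise ValueError(f"index {idx} out of bounds for {shape}")
    return SparseCOO(indices, values, shape)


def sparse_coo_unsafe(indices, values, shape):
    # Mimics _sparse_coo_tensor_unsafe: trust the caller, skip all checks.
    return SparseCOO(indices, values, shape)


def shallow_copy(t, unsafe=True):
    # AccumulateGrad only rewraps the indices/values of a tensor that is
    # already valid, so the unchecked path is safe there.
    ctor = sparse_coo_unsafe if unsafe else sparse_coo_checked
    return ctor(t.indices, t.values, t.shape)
```

The copy shares the underlying indices and values buffers rather than cloning them, which is what makes it a shallow copy.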
Using the benchmark code mentioned in
https://github.com/pytorch/pytorch/issues/36120, these are the results:
1) 65.1 ms on master with this PR.
2) 127.5 ms for PyTorch 1.4.
3) 916.5 ms on master without this patch.
ghstack-source-id: 101817209
Test Plan: waitforbuildbot
Differential Revision: D20935573
fbshipit-source-id: 4661bc779c06b47b5eb677e3fd4e192d1e3cba77