Add fp16 support to SparseLengthSum PyTorch operator (#41058)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41058
The SparseLengthSum PyTorch operator previously accepted only float and double inputs; this diff adds fp16 support to the operator.
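For reference, a minimal NumPy sketch of the operator's semantics (gather rows by indices, then sum consecutive groups given by lengths), run here on an fp16 input. This is an illustration of the expected behavior, not the actual C++/CUDA implementation in this diff; the helper name `sparse_lengths_sum` is chosen for clarity.

```python
import numpy as np

def sparse_lengths_sum(data, indices, lengths):
    """Reference semantics: gather rows of `data` by `indices`, then
    sum consecutive groups of sizes given by `lengths`.
    (Sketch only; zero-length segments are not handled here.)"""
    rows = data[indices]
    # starting offset of each segment within the gathered rows
    offsets = np.cumsum([0] + list(lengths))[:-1]
    return np.add.reduceat(rows, offsets, axis=0)

# fp16 input: before this change only float32/float64 were accepted
data = np.arange(12, dtype=np.float16).reshape(4, 3)
indices = np.array([0, 1, 2, 3])
lengths = np.array([2, 2])  # two segments of two rows each
out = sparse_lengths_sum(data, indices, lengths)
# out has shape (2, 3) and keeps the float16 dtype
```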
Reviewed By: jianyuh
Differential Revision: D22387253
fbshipit-source-id: 2a7d03ceaadbb7b04077cff72ab77da6457ba989