Adds a workaround for ScalarType::Byte for CUDA (#35027)
Summary:
This PR adds a CUDA workaround for `ScalarType::Byte` in the `AT_DISPATCH_*` macros.
As discussed here:
https://github.com/pytorch/pytorch/issues/34826
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35027
Differential Revision: D20596555
Pulled By: colesbury
fbshipit-source-id: 72e842603723a5aa146e4224e79befafc62f2624