Using _floats_wrapper in per_channel_tensor generation (#31780)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31780
We need to specify the `width` argument so that the generated floats are exactly representable as `float32`.
Fixes: https://github.com/pytorch/pytorch/issues/31774
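The representability concern can be illustrated with a small stdlib sketch (the helper name `representable_as_float32` is hypothetical, not from this PR): a `float64` value generated without a width constraint may not survive a round trip through single precision, which is what caused the mismatches this change avoids.

```python
import struct

def representable_as_float32(x: float) -> bool:
    # Round-trip x through IEEE-754 single precision ('f') and
    # check that the value comes back unchanged.
    packed = struct.pack("f", x)
    return struct.unpack("f", packed)[0] == x

# 0.5 is exactly representable in binary, so it round-trips;
# 0.1 is not, so float64 -> float32 -> float64 changes it.
```

Constraining the generation width up front sidesteps this class of failure, since every generated value is already a valid `float32`.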
Test Plan:
CI
Imported from OSS
Differential Revision: D19275165
fbshipit-source-id: 50560b4208c562b6bcd2abccadd234f29fbb4b0a