Add BF16 type to _autocast_to_full_precision (#67707)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67707
https://github.com/pytorch/pytorch/pull/63939/files added FP16 support to TorchScript.
This change adds the BF16 dtype as a supported type when converting to full precision.
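As an illustrative sketch (not the PR's implementation), the behavior being extended can be pictured as upcasting a BF16 tensor to FP32 when full precision is requested; `x.float()` is used here as a stand-in for the internal `_autocast_to_full_precision` path:

```python
import torch

# A BF16 tensor, e.g. produced under autocast on an A100.
x = torch.ones(2, 2, dtype=torch.bfloat16)

# Converting to full precision yields FP32, mirroring what the
# BF16 branch of _autocast_to_full_precision now enables.
y = x.float()
print(y.dtype)  # torch.float32
```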
Test Plan: Unit tests. Also tested BF16 locally on an A100 GPU with an MLP model.
Reviewed By: idning
Differential Revision: D32027152
fbshipit-source-id: b2a5ff2b22ea1e02306b0399f2b39b8493be4f45