Disable tests that use bfloat16 for SM < 80 (#118449)
```
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
RuntimeError: Internal Triton PTX codegen error:
ptxas /tmp/compile-ptx-src-83b319, line 51; error : Feature '.bf16' requires .target sm_80 or higher
ptxas /tmp/compile-ptx-src-83b319, line 51; error : Feature 'cvt with .f32.bf16' requires .target sm_80 or higher
ptxas /tmp/compile-ptx-src-83b319, line 59; error : Feature '.bf16' requires .target sm_80 or higher
ptxas /tmp/compile-ptx-src-83b319, line 59; error : Feature 'cvt with .f32.bf16' requires .target sm_80 or higher
ptxas /tmp/compile-ptx-src-83b319, line 65; error : Feature '.bf16' requires .target sm_80 or higher
ptxas /tmp/compile-ptx-src-83b319, line 65; error : Feature 'cvt.bf16.f32' requires .target sm_80 or higher
ptxas fatal : Ptx assembly aborted due to errors
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
python test/inductor/test_torchinductor.py -k test_bfloat16_to_int16_cuda
```
Fixed a test failure caused by using bfloat16 on pre-SM80 GPUs (the failure is seen on V100 for this test).
See also #113384
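The gating logic can be sketched as a compute-capability check wrapped in a skip decorator. This is a hypothetical illustration, not the actual PR diff: `capability_at_least` and `current_capability` are stand-in helpers (in PyTorch the capability would come from `torch.cuda.get_device_capability()`), stubbed here so the sketch runs without a GPU.

```python
# Hypothetical sketch: skip bfloat16 tests when the CUDA device's compute
# capability is below SM 8.0 (Ampere), where ptxas rejects the .bf16 feature.
import unittest

def capability_at_least(cap, minimum=(8, 0)):
    # Tuple comparison orders by major then minor version:
    # (7, 0) < (8, 0), while (8, 6) >= (8, 0).
    return cap >= minimum

def current_capability():
    # Stand-in for torch.cuda.get_device_capability(); stubbed so the
    # sketch runs without a GPU. (7, 0) corresponds to a V100.
    return (7, 0)

@unittest.skipIf(not capability_at_least(current_capability()),
                 "bfloat16 requires SM >= 80")
class TestBF16Cast(unittest.TestCase):
    def test_bfloat16_to_int16(self):
        # The real test compiles a bf16 -> int16 cast through Inductor;
        # elided here since it needs CUDA hardware.
        pass
```

With the stubbed V100 capability, the decorator skips the test rather than letting Triton emit PTX that ptxas cannot assemble.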
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118449
Approved by: https://github.com/eqy, https://github.com/peterbell10