Torchbench model tolerance changes (#108598)
Move detectron2_fcos_r_50_fpn to amp. The minifier isolated the following snippet as the source of the divergence; here inductor actually has better numerics than eager:
```
import torch

def foo(x):
    return x > 0.2

inp = torch.tensor([0.2002], device="cuda", dtype=torch.bfloat16)
print(foo(inp))
print(torch.compile(foo)(inp))
```
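The divergence comes down to bfloat16 rounding: 0.2002 and 0.2 round to the same bfloat16 value, so the result of `x > .2` depends on whether the comparison happens in bfloat16 or after upcasting to a wider type. A minimal pure-Python sketch (the `to_bf16` helper is hypothetical, not part of the PR; the upcast-vs-no-upcast framing is an assumption about where the two backends differ):

```python
import struct

def to_bf16(x: float) -> float:
    """Round a float to the nearest bfloat16 value (round-to-nearest-even).

    bfloat16 keeps only the top 16 bits of the float32 bit pattern.
    Hypothetical helper for illustration only.
    """
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    bits = (bits + 0x7FFF + ((bits >> 16) & 1)) >> 16
    return struct.unpack("<f", struct.pack("<I", bits << 16))[0]

# Both inputs land on the same bfloat16 value, 0.2001953125, so a comparison
# carried out entirely in bfloat16 sees them as equal, while comparing the
# upcast value against the exact scalar 0.2 yields True.
print(to_bf16(0.2002))                  # 0.2001953125
print(to_bf16(0.2))                     # 0.2001953125
print(to_bf16(0.2002) > 0.2)            # True
print(to_bf16(0.2002) > to_bf16(0.2))   # False
```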
doctr_reco_predictor had only minimal divergence (0.002 observed vs. the 0.001 required), so the tolerance is bumped there.
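The tolerance bump can be read as loosening a relative-error criterion. A simplified sketch of such a check (the actual torchbench accuracy comparison is more involved; `rel_err_ok` is a hypothetical helper):

```python
def rel_err_ok(actual: float, expected: float, tol: float) -> bool:
    # Pass when the relative error stays within `tol`.
    # Hypothetical simplification of a benchmark accuracy check.
    return abs(actual - expected) <= tol * abs(expected)

print(rel_err_ok(1.002, 1.0, 0.001))  # divergence of ~0.002 fails a 0.001 tolerance
print(rel_err_ok(1.002, 1.0, 0.004))  # passes once the tolerance is loosened
```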
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108598
Approved by: https://github.com/shunting314