Fix inf norm grad (#48122)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/41779
Also fixes an issue where the inf norm could return small non-zero values because the reduction was seeded with `std::numeric_limits::min`, which for floating-point types "returns the minimum positive normalized value" rather than the most negative (or zero) value. See https://en.cppreference.com/w/cpp/types/numeric_limits/min.
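A minimal Python sketch of the pitfall, using `sys.float_info.min` as an analogue of C++ `std::numeric_limits<double>::min()` (this is illustrative only, not the actual PyTorch kernel code):

```python
import sys

# sys.float_info.min is the smallest positive normalized double --
# the Python analogue of C++ std::numeric_limits<double>::min().
# It is NOT zero and NOT the most negative representable value.
zeros = [0.0, 0.0, 0.0]

# Buggy max-|x| reduction: seeding with the "min" constant means the
# result can never drop below ~2.2e-308, so the inf norm of an
# all-zero input comes out as a tiny positive number instead of 0.
buggy = sys.float_info.min
for x in zeros:
    buggy = max(buggy, abs(x))

# Correct reduction: seed with 0.0 (since |x| >= 0 for every x), or
# with std::numeric_limits::lowest() for a general max reduction.
correct = 0.0
for x in zeros:
    correct = max(correct, abs(x))

print(buggy)    # tiny positive value, not 0.0
print(correct)  # 0.0
```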
```
>>> import torch
>>> with torch.enable_grad():
...     a = torch.tensor([
...         [9., 2., 9.],
...         [-2., -3., -4.],
...         [7., 8., -9.],
...     ], requires_grad=True)
...     b = torch.norm(a, p=float('inf'))
...     b.backward()
...     print(a.grad)
...
tensor([[ 0.3333,  0.0000,  0.3333],
        [-0.0000, -0.0000, -0.0000],
        [ 0.0000,  0.0000, -0.3333]])
```
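The printed gradient matches an even-split subgradient convention: each of the `k` entries attaining the maximum absolute value receives `sign(x) / k` (here `|9|` is attained three times, hence `0.3333`). A plain-Python sketch, assuming that convention:

```python
# Even-split subgradient of the inf norm (illustrative sketch).
a = [[9., 2., 9.],
     [-2., -3., -4.],
     [7., 8., -9.]]

def sign(x):
    # Returns 1, -1, or 0 as an int.
    return (x > 0) - (x < 0)

m = max(abs(x) for row in a for x in row)       # largest |x|: 9.0
k = sum(abs(x) == m for row in a for x in row)  # number of tied entries: 3
grad = [[sign(x) / k if abs(x) == m else 0.0 for x in row] for row in a]
print(grad)  # each max-|x| entry gets sign(x)/3, all others 0.0
```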
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48122
Reviewed By: izdeby
Differential Revision: D25093315
Pulled By: soulitzer
fbshipit-source-id: be1a7af32fe8bac0df877971fd75089d33e4bd43