[NestedTensor] Call contiguous in linear backward (#94317)
Fixes #94303
Previously, if the incoming (upstream) gradient in linear_backward was discontiguous, we would raise a torch check error. This updates the implementation to instead call contiguous() on the gradient, and downgrades the check to an internal assert.
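A minimal sketch of the fix pattern (illustrative Python, not the actual C++ implementation; `linear_backward_grad_input` is a hypothetical name): rather than rejecting a non-contiguous upstream gradient, normalize its layout first, then keep the contiguity check as an internal invariant.

```python
import torch

def linear_backward_grad_input(grad_output, weight):
    # Hypothetical sketch: previously a torch check rejected discontiguous
    # grad_output; now we call contiguous() to normalize the memory layout.
    grad_output = grad_output.contiguous()
    # The check becomes an internal assert on an invariant we just enforced.
    assert grad_output.is_contiguous()
    # grad_input = grad_output @ weight, shapes (N, out) @ (out, in) -> (N, in)
    return grad_output @ weight

# A transposed tensor is a common source of discontiguous gradients.
grad = torch.randn(5, 4).t()      # shape (4, 5), not contiguous
weight = torch.randn(5, 3)        # (out_features=5, in_features=3)
grad_input = linear_backward_grad_input(grad, weight)  # shape (4, 3)
```

This mirrors the general pattern in the PR: accept any layout at the boundary, pay the copy cost only when needed, and assert contiguity internally.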
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94317
Approved by: https://github.com/mikaylagawarecki