[spmd compile API] add a (temporary) mechanism for overriding input tensors' placements (#98391)
Currently, the compile API assumes every input tensor is sharded along its first dimension, so DTensor expansion fails for input tensors sharded along any other dimension. This change adds a temporary mechanism for overriding input tensors' placements so such inputs can be handled.
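A minimal sketch of the idea (not the actual `torch.distributed._spmd` API): an override table maps an input's position to the dimension it is sharded on, with dim 0 remaining the default. Names like `expand_inputs` and `overrides` are hypothetical, and plain nested lists stand in for tensors.

```python
# Hypothetical sketch: override which dimension each input is sharded on.
# `shard`, `expand_inputs`, and `overrides` are illustrative names, not the
# real compile-API surface.

def shard(matrix, dim, num_shards):
    """Split a 2-D list-of-lists `matrix` into `num_shards` pieces along `dim`."""
    if dim == 0:
        rows_per = len(matrix) // num_shards
        return [matrix[i * rows_per:(i + 1) * rows_per]
                for i in range(num_shards)]
    cols_per = len(matrix[0]) // num_shards
    return [[row[i * cols_per:(i + 1) * cols_per] for row in matrix]
            for i in range(num_shards)]

def expand_inputs(inputs, num_shards, overrides=None):
    """Shard each input along dim 0 unless `overrides` maps its index
    to a different shard dimension (the mechanism this PR adds)."""
    overrides = overrides or {}
    return [shard(t, overrides.get(i, 0), num_shards)
            for i, t in enumerate(inputs)]

x = [[1, 2], [3, 4]]   # sharded along rows (dim 0, the old default)
y = [[5, 6], [7, 8]]   # sharded along columns (dim 1, via the override)
shards = expand_inputs([x, y], num_shards=2, overrides={1: 1})
# x splits into row shards [[1, 2]] and [[3, 4]];
# y splits into column shards [[5], [7]] and [[6], [8]].
```

Without the override, `y` would also be split along dim 0, which is exactly the limitation described above.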
In addition, the compile API now respects non-tensor inputs beyond nn.Module and optim.Optimizer instances.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98391
Approved by: https://github.com/mrshenli