[PT] Allowing deepcopy in uninitialized parameter (#83809)
Summary: `UninitializedParameter` overrides the `__new__` method, so the parent class's `__deepcopy__` no longer works; as a result, models using lazy modules cannot be deep-copied.
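A minimal sketch of the failure mode and the fix, using illustrative stand-in classes rather than PyTorch's actual ones:

```python
import copy

class Broken:
    # A __new__ that takes a required extra argument (analogous to
    # UninitializedParameter's override) defeats copy.deepcopy's default
    # reconstruction path, which calls cls.__new__(cls) with no arguments.
    def __new__(cls, size):
        return super().__new__(cls)

    def __init__(self, size):
        self.size = size

class Fixed(Broken):
    # The fix: implement __deepcopy__ explicitly so deepcopy never hits
    # the incompatible __new__ signature.
    def __deepcopy__(self, memo):
        new = type(self)(copy.deepcopy(self.size, memo))
        memo[id(self)] = new
        return new

try:
    copy.deepcopy(Broken(10))
except TypeError as e:
    # deepcopy ends up calling Broken.__new__(Broken), which is missing
    # the required 'size' argument
    print("Broken:", e)

copied = copy.deepcopy(Fixed(10))
print("Fixed, size =", copied.size)
```

The PyTorch fix follows the same idea: give `UninitializedParameter` its own deepcopy path instead of relying on the parent's, which assumes the default `__new__` signature.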
Test Plan:
Copied a lazy module example locally and ran it.
After change:
```
shenxiu@devbig1109:fbcode (5c57dd833)$ bento console --kernel pytorch --local
Python 3.8.6 (default, Jun 10 2022, 04:32:13)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.21.0 -- An enhanced Interactive Python. Type '?' for help.
In [1]: import copy
   ...: import torch
   ...:
   ...: class LazyModule(torch.nn.Module):
   ...:     def __init__(self):
   ...:         super().__init__()
   ...:         self.m = torch.nn.LazyLinear(10)
   ...:
   ...:     def forward(self, input):
   ...:         x = self.m(input)
   ...:         return x
   ...:
   ...: m = LazyModule()
   ...: print(m.state_dict())
OrderedDict([('m.weight', <UninitializedParameter>), ('m.bias', <UninitializedParameter>)])
In [2]: copy.deepcopy(m)
Out[2]:
LazyModule(
(m): LazyLinear(in_features=0, out_features=10, bias=True)
)
```
Before the change, the code above raised:
```
TypeError: empty() received an invalid combination of arguments - got (int, dtype=NoneType, device=bool), but expected one of:
* (tuple of ints size, *, tuple of names names, torch.memory_format memory_format, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)
* (tuple of ints size, *, torch.memory_format memory_format, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)
* (tuple of SymInts size, *, torch.memory_format memory_format, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)
```
Cloned n2369721 locally and ran it successfully (through the console rather than a notebook, because bento notebooks don't work well with buck2).
Reviewed By: avilay
Differential Revision: D38866072
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83809
Approved by: https://github.com/ngimel