[MPS] Initialize `MPSDevice::_mtl_device` property to `nil` (#78136)
This prevents `import torch` from accidentally crashing on machines with no Metal devices.
Should prevent the crashes reported in https://github.com/pytorch/pytorch/pull/77662#issuecomment-1134637986 and https://github.com/pytorch/functorch/runs/6560056366?check_suite_focus=true
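The fix gives the member a `nil` default, so Objective-C messages sent through it before a real Metal device is acquired become defined no-ops instead of jumps through garbage. A minimal sketch of the idea with a simplified class layout (the real header, `aten/src/ATen/mps/MPSDevice.h`, has more members):
```
// Illustrative sketch only -- not the exact header contents.
#import <Metal/Metal.h>

namespace at::mps {

class MPSDevice {
 public:
  // Returns nil (rather than crashing) when no Metal device exists.
  id<MTLDevice> device() const { return _mtl_device; }

 private:
  // Without `= nil`, this member holds garbage until something assigns
  // it, and the first objc_msgSend through it crashes, as in the
  // backtrace below.
  id<MTLDevice> _mtl_device = nil;
};

} // namespace at::mps
```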
Backtrace of the crash:
```
(lldb) bt
* thread #1, stop reason = signal SIGSTOP
* frame #0: 0x00007fff7202be57 libobjc.A.dylib`objc_msgSend + 23
frame #1: 0x000000010fd9f524 libtorch_cpu.dylib`at::mps::HeapAllocator::MPSHeapAllocatorImpl::MPSHeapAllocatorImpl() + 436
frame #2: 0x000000010fda011d libtorch_cpu.dylib`_GLOBAL__sub_I_MPSAllocator.mm + 125
frame #3: 0x000000010ada81e3 dyld`ImageLoaderMachO::doModInitFunctions(ImageLoader::LinkContext const&) + 535
frame #4: 0x000000010ada85ee dyld`ImageLoaderMachO::doInitialization(ImageLoader::LinkContext const&) + 40
(lldb) up
frame #1: 0x000000010fd9f524 libtorch_cpu.dylib`at::mps::HeapAllocator::MPSHeapAllocatorImpl::MPSHeapAllocatorImpl() + 436
libtorch_cpu.dylib`at::mps::HeapAllocator::MPSHeapAllocatorImpl::MPSHeapAllocatorImpl:
-> 0x10fd9f524 <+436>: movq %rax, 0x1b0(%rbx)
0x10fd9f52b <+443>: movw $0x0, 0x1b8(%rbx)
0x10fd9f534 <+452>: addq $0x8, %rsp
0x10fd9f538 <+456>: popq %rbx
(lldb) disassemble
...
0x10fd9f514 <+420>: movq 0xf19ad15(%rip), %rsi ; "maxBufferLength"
0x10fd9f51b <+427>: movq %r14, %rdi
0x10fd9f51e <+430>: callq *0xeaa326c(%rip) ; (void *)0x00007fff7202be40: objc_msgSend
```
which corresponds to the `[m_device maxBufferLength]` call, where `m_device` is left uninitialized at
https://github.com/pytorch/pytorch/blob/2ae3c59e4bcb8e6e75b4a942cacc2d338c88e609/aten/src/ATen/mps/MPSAllocator.h#L171
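Messaging `nil` in Objective-C is defined to return zero, while messaging an uninitialized pointer is undefined behavior inside `objc_msgSend`. A small self-contained Objective-C++ demo of the safe case (the file name and compile command are assumptions, not part of the PR):
```
// demo.mm -- build with e.g.: clang++ demo.mm -framework Metal -framework Foundation
#import <Metal/Metal.h>
#include <cstdio>

int main() {
  // Explicitly nil-initialized, mirroring the fix to MPSDevice::_mtl_device.
  id<MTLDevice> m_device = nil;

  // Messaging nil is a defined no-op that returns zero, so this does
  // not crash even though no Metal device exists.
  NSUInteger max_len = [m_device maxBufferLength];
  printf("maxBufferLength on nil device: %lu\n", (unsigned long)max_len);

  // Had m_device held uninitialized garbage instead, this same call
  // would die inside objc_msgSend, exactly as in the backtrace above.
  return 0;
}
```
With the member defaulted to `nil`, the allocator's early `maxBufferLength` query simply returns zero on Metal-less machines, which should be enough for `import torch` to survive.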
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78136
Approved by: https://github.com/seemethere