Deallocate constant when it is no longer needed in constant folding (#106216)
Differential Revision: [D47881214](https://our.internmc.facebook.com/intern/diff/D47881214)
Tested locally with:
```python
import torch

@torch.compile()
def foo():
    size_gb = 1
    size_bytes = size_gb * 1024 * 1024 * 1024 * 20  # 20 GB
    # Allocate the tensor on the GPU (size_bytes // 4 float32 elements,
    # since each float32 takes 4 bytes)
    tensor = torch.empty(size_bytes // 4, device='cuda')
    for _ in range(10):
        tensor = tensor + 1
    return tensor

foo()
```
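The idea behind the change can be illustrated with a minimal, standalone sketch of constant folding (the `fold_constants` helper and graph representation here are hypothetical, not Inductor's actual internals): once a constant's last consumer has been folded, dropping the reference immediately lets the allocator reclaim its memory instead of holding every intermediate constant until the pass finishes.

```python
from collections import Counter

def fold_constants(graph, constants):
    """Fold ops whose inputs are all constant, freeing each constant
    as soon as its last use has been consumed.

    graph: list of (output_name, fn, input_names)
    constants: dict name -> value (mutated in place)
    """
    # Count how many times each name is consumed by the graph.
    uses = Counter(a for _, _, args in graph for a in args)
    remaining = []
    for out, fn, args in graph:
        if all(a in constants for a in args):
            # Fold: compute the op at compile time.
            constants[out] = fn(*(constants[a] for a in args))
            for a in args:
                uses[a] -= 1
                if uses[a] == 0 and a in constants:
                    del constants[a]  # last use folded: reclaim now
        else:
            remaining.append((out, fn, args))
    return remaining

# Usage: both ops fold away; "a" and "b" are freed mid-pass.
graph = [
    ("b", lambda x: x * 2, ("a",)),
    ("c", lambda x, y: x + y, ("b", "a")),
]
consts = {"a": 3}
left = fold_constants(graph, consts)
# left == [] and consts == {"c": 9}
```

For real tensors the same pattern applies: deleting the Python reference to a folded constant allows the CUDA caching allocator to reuse that memory for subsequent folded values, which is what keeps the 20 GB test above from accumulating ten live intermediates.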
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106216
Approved by: https://github.com/Skylion007