[ONNX] Reduce exporter memory usage by removing intermediate values (#101148)
This commit reduces the exporter's memory usage by as much as 50%. During the shape inference step, the exporter caches the values of intermediate tensors in a `ConstantValueMap`. This cache can use as much memory as the model itself, or even more: for example, model weight tensors are often fed to a Transpose node, whose output is the same size as the weights. This commit fixes the issue by removing an intermediate tensor's cached value once all of its consumers have used it.
The cached values are only used for shape inference, so removing them after their last use is safe. `ConstantValueMap` is cleared anyway once shape inference completes for the entire graph.
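A minimal Python sketch of the idea (the actual change lives in the exporter's C++ shape inference pass and `ConstantValueMap`; the names and data structures below are hypothetical stand-ins, not the real API):
```python
from collections import namedtuple

# Hypothetical stand-in for a graph node: a name, input value names, output value names.
Node = namedtuple("Node", ["name", "inputs", "outputs"])

def shape_inference_with_eviction(nodes, constant_value_map, graph_inputs):
    # Count how many consumer nodes read each value.
    remaining_uses = {}
    for node in nodes:
        for inp in node.inputs:
            remaining_uses[inp] = remaining_uses.get(inp, 0) + 1

    for node in nodes:
        # ... per-node shape inference would run here and may cache the
        # node's output tensors in constant_value_map ...

        # Once a value's last consumer has been processed, its cached tensor
        # is no longer needed for shape inference, so drop it to free memory.
        for inp in node.inputs:
            remaining_uses[inp] -= 1
            if remaining_uses[inp] == 0 and inp not in graph_inputs:
                constant_value_map.pop(inp, None)
```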
As an example, here is the model from issue #61263:
```python
import torch
import math
# Size in GB
tensor_size = 1
model_size = 8
layers_num = model_size // tensor_size
kB = 1024
MB = kB * kB
GB = MB * kB
precision_size = 4 # bytes per float
activation_size = math.floor(math.sqrt(tensor_size * GB / precision_size))
class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        for i in range(layers_num):
            name = "fc_%d" % i
            linear = torch.nn.Linear(activation_size, activation_size)
            setattr(self, name, linear)

    def forward(self, x):
        for i in range(layers_num):
            name = "fc_%d" % i
            linear = getattr(self, name)
            x = linear(x)
        return x
model = Net().cuda()
input = torch.zeros(activation_size, requires_grad=True).cuda()
with torch.no_grad():
    torch.onnx.export(model, (input, ), './model_large.onnx', do_constant_folding=False, opset_version=13)
```
It is just several large linear layers stacked together. Before this commit, my peak GPU memory usage during export was about 16.7 GB, roughly twice the model size. With this commit in combination with #101134, it was only about 9.5 GB.
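The PR does not state how these numbers were collected; one way to observe peak GPU memory from PyTorch's own caching allocator (appended to the script above, and only covering allocations made through PyTorch) would be:
```python
# Hypothetical measurement helper, not part of the PR: report the peak GPU
# memory seen by PyTorch's caching allocator during export.
torch.cuda.reset_peak_memory_stats()
with torch.no_grad():
    torch.onnx.export(model, (input, ), './model_large.onnx',
                      do_constant_folding=False, opset_version=13)
print(f"peak allocated: {torch.cuda.max_memory_allocated() / GB:.1f} GB")
```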
Together with #101134, this fixes issue #61263.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/101148
Approved by: https://github.com/BowenBao