Looks like this is a known issue and won't be fixed until the new torch.onnx.dynamo_export is developed: pytorch/pytorch#105134 (comment)
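In isolation, the limitation can be reproduced with something like the following minimal sketch (assuming, per the linked issue, that the failing path is nn.functional.fold lowering to col2im; the shapes are arbitrary):

    # Minimal sketch of the limitation referenced above (an assumption based on the
    # linked issue: nn.functional.fold lowers to col2im, which the TorchScript-based
    # ONNX exporter cannot handle on affected torch versions). Shapes are arbitrary.
    import torch
    import torch.nn as nn

    class Fold(nn.Module):
        def forward(self, patches: torch.Tensor) -> torch.Tensor:
            # (batch, channels * kernel_h * kernel_w, n_blocks) -> (batch, channels, 8, 8)
            return nn.functional.fold(patches, output_size=(8, 8), kernel_size=(2, 2), stride=(2, 2))

    patches = torch.randn(1, 3 * 2 * 2, 16)  # 16 = (8/2) * (8/2) blocks
    # Expected to raise on torch versions where the exporter has no symbolic for col2im.
    torch.onnx.export(Fold(), patches, "fold.onnx", opset_version=18)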
However, there are workarounds, which we could look into. Patching the folding function with:
def folding(self, patches: torch.Tensor, output_size: Tuple[int, int]) -> torch.Tensor:
    batch_size, in_dim, patch_size, n_patches = patches.shape
    patches = patches.reshape(batch_size, in_dim * patch_size, n_patches)

    # Calculate the number of patches in each dimension
    n_patches_height = int(n_patches ** 0.5)
    n_patches_width = n_patches_height

    # Initialize the output feature map
    feature_map = torch.zeros((batch_size, in_dim, output_size[0], output_size[1]), device=patches.device)

    # Iterate over each patch and place it in the correct position in the feature map
    for i in range(n_patches_height):
        for j in range(n_patches_width):
            patch_idx = i * n_patches_width + j
            patch = patches[:, :, patch_idx]
            patch = patch.reshape(batch_size, in_dim, self.patch_height, self.patch_width)
            feature_map[:, :, i * self.patch_height:(i + 1) * self.patch_height, j * self.patch_width:(j + 1) * self.patch_width] = patch

    return feature_map
and setting opset=12 seems to give equivalent results. It doesn't support dynamic width/height, but that shouldn't be a problem since the processor resizes/crops to 256x256 anyway.
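For reference, a rough sketch of how the patch could be applied and the "equivalent results" claim spot-checked; the module path, the class name (MobileViTV2Layer), and the checkpoint are assumptions on my part, not something verified in this PR:

    # Hedged sketch: monkey-patch the loop-based folding defined above over the
    # nn.functional.fold-based implementation (assumed to live on MobileViTV2Layer),
    # export at opset 12, then compare ONNX Runtime and PyTorch outputs.
    import numpy as np
    import onnxruntime as ort
    import torch
    import transformers.models.mobilevitv2.modeling_mobilevitv2 as modeling_mobilevitv2
    from transformers import MobileViTV2ForImageClassification

    # `folding` is the patched function defined above.
    modeling_mobilevitv2.MobileViTV2Layer.folding = folding

    model = MobileViTV2ForImageClassification.from_pretrained(
        "apple/mobilevitv2-1.0-imagenet1k-256"  # example checkpoint with 256x256 inputs
    ).eval()
    dummy_input = torch.randn(1, 3, 256, 256)

    torch.onnx.export(
        model,
        dummy_input,
        "mobilevitv2_opset12.onnx",
        opset_version=12,
        input_names=["pixel_values"],
        output_names=["logits"],
    )

    # Spot-check that the exported graph matches the (patched) PyTorch model.
    session = ort.InferenceSession("mobilevitv2_opset12.onnx", providers=["CPUExecutionProvider"])
    onnx_logits = session.run(None, {"pixel_values": dummy_input.numpy()})[0]
    with torch.no_grad():
        torch_logits = model(dummy_input).logits.numpy()
    print("max abs diff:", np.abs(onnx_logits - torch_logits).max())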
This PR has been marked as stale because it has been open for 90 days with no activity. This thread will be automatically closed in 30 days if no further activity occurs.
What does this PR do?
Attempts to add support for mobilevitv2 ONNX export. However, I've run into a few issues:
e.g., running the export with DEFAULT_ONNX_OPSET=18, I get an export error (a rough sketch of the invocation is included below).

Fixes # (issue)
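A sketch of the kind of export invocation that hits this; the use of optimum's main_export, the checkpoint, and the task are assumptions for illustration, not the exact command used here:

    # Hedged sketch of an export invocation (assumed, not the exact command used in
    # this PR): once a mobilevitv2 ONNX config is registered, this should reach the
    # torch exporter and surface the fold/col2im error at opset 18.
    from optimum.exporters.onnx import main_export

    main_export(
        model_name_or_path="apple/mobilevitv2-1.0-imagenet1k-256",  # example checkpoint
        output="mobilevitv2_onnx",
        task="image-classification",
        opset=18,
    )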
Before submitting
Who can review?
@fxmarty @echarlaix