[TOSA] Fix conv2d.padding decomposition to emit 2D padding for aten.convolution #4380
Conversation
sjarus left a comment:
Is there an e2e test that exercises this functionality ?
I did not write an e2e test that directly calls it. In fact, an e2e test for valid padding already exists and was added as part of #3883 here, but it does not catch this issue because ExportedProgram decompositions are not run as part of the test. Is there another way to lock this down with an e2e test?

I believe we run decompositions in the fx_importer tests. Let me double check.
Oh, we do not seem to run decompositions... This doesn't seem right. I think we almost certainly want to run certain decompositions. There should be a default list somewhere in the codebase. Do you know in which versions of pytorch the issue referenced in this PR gets reproduced?
// padding {height, width}. The PyTorch OFM computation uses 2*pad in each
// spatial direction, implying the same top=bottom=height and left=right=width
// values for TOSA.
SmallVector<int64_t> padding(
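The comment above describes mapping PyTorch's symmetric two-value padding to TOSA's four-value form. A minimal Python sketch of that expansion (the function name and list layout are illustrative, not the actual torch-mlir helper, which is written in C++):

```python
def tosa_pad_from_torch(pad_hw):
    """Expand PyTorch's symmetric {height, width} padding into TOSA's
    {top, bottom, left, right} layout, where top == bottom and
    left == right because PyTorch pads both sides equally."""
    pad_h, pad_w = pad_hw
    return [pad_h, pad_h, pad_w, pad_w]

print(tosa_pad_from_torch([1, 2]))  # -> [1, 1, 2, 2]
```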
Does linalg path need a similar fix? If so, it may be better to add a canonicalization pattern for torch.aten.convolution so that padding is extended to be 2 dimension for all TorchToXXX paths in one place.
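The suggested canonicalization would broadcast a rank-deficient padding list before any TorchToXXX lowering runs. A hedged Python sketch of the intended transformation (the real pattern would be a C++ rewrite on torch.aten.convolution; the names here are illustrative):

```python
def canonicalize_padding(padding, num_spatial_dims):
    """Broadcast a single padding value (e.g. the [0] produced by the
    aten.conv2d.padding decomposition for padding='valid') to one value
    per spatial dimension, leaving full-length lists untouched."""
    if len(padding) == 1 and num_spatial_dims > 1:
        return padding * num_spatial_dims
    return padding

print(canonicalize_padding([0], 2))  # -> [0, 0]
```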
torch-to-linalg asserts due to the same assumption here:
assert(static_cast<int64_t>(unpaddedDims + padding.size()) ==
torch-to-stablehlo also asserts due to the same assumption here:
assert(padding.size() == dilation.size() &&
So it looks like this is similar to #3885. Shall I proceed with a canonicalization pattern similar to the proposed fix in #4250?
I used 2.10.0.dev20251016+cpu to repro this issue.
I think we do run some decompositions by default:
torch-mlir/python/torch_mlir/fx.py Line 103 in b834f94
The conv.padding variant is not in that list. It doesn't seem trivial to specify a non-default decomposition list from the e2e suite.
torch.nn.Conv2d with padding='valid' generates a torch.ops.aten.conv2d.padding op in the ExportedProgram, which is later decomposed to torch.ops.aten.convolution.default with a single padding value of [0] after running ep.run_decompositions(). This causes a compiler failure in torch-to-tosa because the code expects padding to have two elements. Here is the print of the ExportedProgram.

Note: A previous, correct decomposition for a similar issue exists and is locked down by e2e tests (refer to relevant context like #3883). This fix aligns the conv2d.padding decomposition with that correct behavior.
Since locking this specific decomposition down with end-to-end (e2e) tests was problematic, a dedicated LIT test has been added to the test suite to ensure this specific padding regression cannot reoccur.
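To make the failure mode concrete, here is a small sketch of the shape expectation that trips the TOSA path (a hypothetical helper; the real lowering enforces this with a C++ assert):

```python
def conv2d_padding_is_valid(padding):
    """A 2-D convolution lowering expects exactly two padding values,
    {height, width}. The broken decomposition emits just [0], which
    fails this check; the fixed decomposition emits [0, 0]."""
    return len(padding) == 2

print(conv2d_padding_is_valid([0]))     # -> False (broken decomposition)
print(conv2d_padding_is_valid([0, 0]))  # -> True  (fixed decomposition)
```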