[ONNX] Adds Support for Some Bitwise Ops in Onnx Exporter #126229
Conversation
🔗 Helpful Links 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/126229
Note: links to docs will display an error until the docs builds have completed. ✅ No failures as of commit 508aeb7 with merge base 82c66bc. This comment was automatically generated by Dr. CI and updates every 15 minutes.
To be completely honest, I'm not very familiar with the internals of torch.onnx.export. Are decomposition passes applied to the graph before torch FX nodes are converted to ONNX nodes? I also believe some automatic dtype promotion occurs, but I got errors when leaving out the explicit type promotion in the conversion for "aten::bitwise_and". In any case, let me know if there is a better way of adding support for these ops and I can make changes.
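For context, a minimal sketch of what an opset 18 symbolic function with explicit type promotion could look like; the registration decorator matches the diff further down, but the cast logic is an illustrative assumption, not the merged implementation:

```python
import functools

from torch.onnx import _type_utils
from torch.onnx._internal import jit_utils, registration

_onnx_symbolic = functools.partial(registration.onnx_symbolic, opset=18)


@_onnx_symbolic("aten::bitwise_and")
def bitwise_and(g: jit_utils.GraphContext, self, other):
    # ONNX BitwiseAnd (opset 18) expects integer inputs with matching
    # element types, so cast `other` to the dtype of `self` up front.
    # (Assumed simplification: real promotion rules are more involved.)
    self_dtype = _type_utils.JitScalarType.from_value(self)
    other = g.op("Cast", other, to_i=self_dtype.onnx_type())
    return g.op("BitwiseAnd", self, other)
```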
@JeremyGe07 With ONNX opset version 18, onnx.ReduceMean switched from taking axes as an attribute to taking axes as an optional input tensor; see the ONNX docs. You could try exporting with an opset version < 18 (as long as you don't need support from newer opset versions). Another thing that might be helpful if you want the most recent changes is to try using a nightly build of torch with
I'd recommend submitting an issue for the ReduceMean problem, since this PR is likely not the best place to discuss it.
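A minimal sketch of the opset-pinning workaround, assuming a hypothetical model that triggers the ReduceMean error:

```python
import torch


class ReduceMeanModel(torch.nn.Module):  # hypothetical repro model
    def forward(self, x):
        return x.mean(dim=1)


model = ReduceMeanModel().eval()
dummy = torch.randn(2, 3, 4)
# Pin the exporter to opset 17, where ReduceMean still takes `axes`
# as an attribute rather than as an optional input tensor.
torch.onnx.export(model, (dummy,), "reduce_mean.onnx", opset_version=17)
```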
If you intend to export FX graphs to ONNX, I recommend using the
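Assuming the recommendation above refers to the FX-based exporter, a minimal sketch using torch.onnx.dynamo_export (available in torch 2.1+); the model and inputs are hypothetical:

```python
import torch


class TinyModel(torch.nn.Module):  # hypothetical example model
    def forward(self, a, b):
        return a & b


model = TinyModel().eval()
a = torch.randint(0, 8, (4,), dtype=torch.int32)
b = torch.randint(0, 8, (4,), dtype=torch.int32)

# Export through the FX/dynamo-based path and save the program.
onnx_program = torch.onnx.dynamo_export(model, a, b)
onnx_program.save("tiny_model.onnx")
```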
Ah, that does seem to be the consensus. I'm submitting this PR since I'm working on projects which use
The PR itself looks good. Some tests would be preferable.
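A hedged sketch of the kind of coverage such a test might add; the module and the eager-only check are illustrative, not the tests from this PR:

```python
import torch


class BitwiseModule(torch.nn.Module):  # illustrative test module
    def forward(self, a, b):
        return a & b, a | b, a ^ b


a = torch.tensor([0b1010, 0b1100], dtype=torch.int32)
b = torch.tensor([0b0110, 0b0101], dtype=torch.int64)  # mixed dtypes exercise promotion

# In PyTorch's ONNX test suite this would run through an export +
# onnxruntime comparison harness; here we only sanity-check eager mode.
for got, want in zip(BitwiseModule()(a, b), (a & b, a | b, a ^ b)):
    assert torch.equal(got, want)
```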
@pytorchbot rebase
@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here.
Successfully rebased; force-pushed from 095bdba to f4f224f.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Good point. I'm happy to add some tests (particularly for
Merge failed. Reason: 1 mandatory check(s) failed. Dig deeper by viewing the failures on hud.
```diff
@@ -35,6 +35,30 @@
 _onnx_symbolic = functools.partial(registration.onnx_symbolic, opset=18)
 
 
+@_onnx_symbolic("aten::bitwise_and")
```
Expose op to line 31
Same for the other two ops
Force-pushed from e624ba5 to b31e666.
I have a fix for most of the CI issues, but I'm not sure how to resolve the docs error:
I'm able to reproduce the error, but I don't know which .rst file to edit. Can someone give me advice on this?
I think this is the part where we have to add the new ops: pytorch/torch/onnx/symbolic_opset18.py, lines 7 to 17 (at 85fd76f).
I'm referencing this: https://github.com/pytorch/pytorch/pull/118828/files#diff-4e77adb023a4e15f75009c08294251dfe9c3b7bb6947e7f7b4609c65edb53b20
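A hedged sketch of that block, assuming it is the module's `__all__` list (which is what exposes the symbolic functions to the docs build); the pre-existing entries and the two unnamed ops are assumptions, with only bitwise_and confirmed in this thread:

```python
# torch/onnx/symbolic_opset18.py (sketch; existing entries illustrative)
__all__ = [
    "col2im",
    "bitwise_and",  # confirmed in this PR's diff
    "bitwise_or",   # assumed: one of "the other two ops"
    "bitwise_xor",  # assumed: one of "the other two ops"
]
```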
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
@zjgarvey Thank you!
Addresses #126194
Adds support for