[MLIR][ONNX] Add OnnxToTorch support for GlobalMaxPool Op #3232

Merged: 6 commits into llvm:main from never/onnx_global_max_pool on May 14, 2024

Conversation

NeverRaR (Contributor) commented Apr 25, 2024

NeverRaR force-pushed the never/onnx_global_max_pool branch 2 times, most recently from 73c3ab1 to 9b006a7, on April 25, 2024 12:05
penguin-wwy requested review from rsuderman, vivekkhandelwal1 and renxida, and removed the review request for vivekkhandelwal1, on April 26, 2024 07:52
(5 resolved review threads on lib/Conversion/TorchOnnxToTorch/DefaultDomainGtoP.cpp, now outdated)
renxida (Collaborator) commented Apr 30, 2024

Hello! Welcome!

Nice PR & much appreciated! Left some comments for code quality. I hope you find them helpful.

NeverRaR requested a review from renxida on May 6, 2024 03:21
vivekkhandelwal1 (Collaborator) left a comment

Hi @NeverRaR, I see many things done wrongly in this lowering. Please correct them.

  • First of all, you should not be lowering this op to AdaptiveMaxPooling; instead, use "AtenMaxPooldOp" (see the sketch after this list).
  • Your lowering returns 2 results, while the original ONNX op expects 1 result.
  • The logic for the computation of the output shape is also wrong; we can't just assume any dynamic dim to be 1.

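A minimal PyTorch sketch of the behavior the reviewer is suggesting (illustrative shapes only, not the PR's code): a max pool whose kernel spans the full spatial extent behaves as a global max pool and has a single result.

```python
import torch

x = torch.randn(2, 3, 8, 8)

# A max pool whose kernel covers the whole spatial extent acts as a
# global max pool: N and C are kept, each spatial dim collapses to 1.
pooled = torch.ops.aten.max_pool2d(x, [x.shape[2], x.shape[3]])
assert list(pooled.shape) == [2, 3, 1, 1]
```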
(2 resolved review threads on lib/Conversion/TorchOnnxToTorch/DefaultDomainGtoP.cpp, now outdated)
NeverRaR (Contributor, Author) commented May 7, 2024

> Hi @NeverRaR, I see many things done wrongly in this lowering. Please correct them.
>
> • First of all, you should not be lowering this op to AdaptiveMaxPooling; instead, use "AtenMaxPooldOp".
> • Your lowering returns 2 results, while the original ONNX op expects 1 result.
> • The logic for the computation of the output shape is also wrong; we can't just assume any dynamic dim to be 1.

@vivekkhandelwal1

  • The reason for using AdaptiveMaxPooling is that the current torch-mlir does not support AtenAdaptiveMaxPool1d, and my model happens to require this case.
  • Directly replacing onnx.GlobalMaxPool with AdaptiveMaxPool will still use the result it needs (the first result of AtenAdaptiveMaxPool).
  • And why is it assumed that all dynamic dims are 1? Actually, according to the ONNX specification:

> Output data tensor from pooling across the input tensor. The output tensor has the same rank as the input. The first two dimensions of output shape are the same as the input (N x C), while the other dimensions are all 1.
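A hypothetical helper (not from the PR) that makes the quoted shape rule concrete, assuming the usual N x C x spatial layout:

```python
# Hypothetical illustration of the ONNX GlobalMaxPool output-shape rule:
# the first two dims (N, C) pass through unchanged and every remaining
# dim becomes 1, whether it was static or dynamic.
def global_max_pool_shape(in_shape):
    return list(in_shape[:2]) + [1] * (len(in_shape) - 2)

assert global_max_pool_shape([2, 3, 8, 8]) == [2, 3, 1, 1]  # 2-D case
assert global_max_pool_shape([4, 16, 100]) == [4, 16, 1]    # 1-D case
```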

vivekkhandelwal1 (Collaborator) commented May 7, 2024

> • The reason for using AdaptiveMaxPooling is that the current torch-mlir does not support AtenAdaptiveMaxPool1d, and my model happens to require this case.

If that's the case, you could have added a lowering for AtenMaxPool1dOp.

> • Directly replacing onnx.GlobalMaxPool with AdaptiveMaxPool will still use the result it needs (the first result of AtenAdaptiveMaxPool).

No, that's not true. You can't replace an op that has one result with an op that has a different number of results. See the crash in the CI for the same:

    torch-mlir-opt: /_work/torch-mlir/torch-mlir/externals/llvm-project/mlir/lib/Transforms/Utils/DialectConversion.cpp:1645: virtual void mlir::ConversionPatternRewriter::replaceOp(Operation *, ValueRange): Assertion `op->getNumResults() == newValues.size() && "incorrect # of replacement values"' failed.

> • And why is it assumed that all dynamic dims are 1? Actually, according to the ONNX specification:
> > Output data tensor from pooling across the input tensor. The output tensor has the same rank as the input. The first two dimensions of output shape are the same as the input (N x C), while the other dimensions are all 1.

If that's the case, then there's no need to put the check; just assign the value 1.
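The mismatch behind that assertion is visible at the PyTorch level too; a small sketch (assumed shapes, not the PR's code) showing that the aten adaptive op carries an extra indices result:

```python
import torch

x = torch.randn(2, 3, 8, 8)

# aten.adaptive_max_pool2d has two results, the pooled values plus the
# indices of the max elements, so it cannot replace a one-result op.
vals, idxs = torch.ops.aten.adaptive_max_pool2d(x, [1, 1])
print(vals.shape, idxs.shape)  # torch.Size([2, 3, 1, 1]) for both
```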

NeverRaR (Contributor, Author) commented May 7, 2024

> > • The reason for using AdaptiveMaxPooling is that the current torch-mlir does not support AtenAdaptiveMaxPool1d, and my model happens to require this case.
>
> If that's the case, you could have added a lowering for AtenMaxPool1dOp.
>
> > • Directly replacing onnx.GlobalMaxPool with AdaptiveMaxPool will still use the result it needs (the first result of AtenAdaptiveMaxPool).
>
> No, that's not true. You can't replace an op that has one result with an op that has a different number of results. See the crash in the CI for the same:
>
>     torch-mlir-opt: /_work/torch-mlir/torch-mlir/externals/llvm-project/mlir/lib/Transforms/Utils/DialectConversion.cpp:1645: virtual void mlir::ConversionPatternRewriter::replaceOp(Operation *, ValueRange): Assertion `op->getNumResults() == newValues.size() && "incorrect # of replacement values"' failed.
>
> > • And why is it assumed that all dynamic dims are 1? Actually, according to the ONNX specification:
> > > Output data tensor from pooling across the input tensor. The output tensor has the same rank as the input. The first two dimensions of output shape are the same as the input (N x C), while the other dimensions are all 1.
>
> If that's the case, then there's no need to put the check; just assign the value 1.

Okay, I think I understand now. All modifications have been completed. I will add a lowering for AtenMaxPool1dOp in a follow-up PR.

NeverRaR (Contributor, Author) commented May 7, 2024

@vivekkhandelwal1 Does the CI failure seem unrelated to my code?

vivekkhandelwal1 (Collaborator) commented

> Does the CI failure seem unrelated to my code?

Yes

NeverRaR force-pushed the never/onnx_global_max_pool branch from 15596c0 to 5f85ed4 on May 11, 2024 06:21
vivekkhandelwal1 (Collaborator) left a comment

LGTM

vivekkhandelwal1 (Collaborator) commented

@NeverRaR, you need approval from other reviewers as well to merge this.

vivekkhandelwal1 merged commit 26b7828 into llvm:main on May 14, 2024
3 checks passed
BaneTrifa pushed a commit to BaneTrifa/torch-mlir that referenced this pull request on May 24, 2024