
QNN Error with GroupNorm #2669

Open
escorciav opened this issue Jan 25, 2024 · 2 comments
Labels
QNN Issues related to QNN

Comments


escorciav commented Jan 25, 2024

Hi,
I was wondering how you are dealing with GroupNorm. I ran into the following odd behavior on the S8G2 DSP (HTP, chipset: SM8550).

| GroupNorm(groups=32) | 1, 64, 512, 512 | 1, 64, 256, 256 | 1, 64, 128, 128 | 1, 64, 64, 64 |
| --- | --- | --- | --- | --- |
| msec | 1528.3 | 371.1 | error | 0.794 |
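For context, `GroupNorm(groups=32)` over 64 channels normalizes each group of 2 channels independently per sample. A minimal NumPy reference of the math (a sketch of the op's definition, not the QNN HTP kernel) over the shape that failed to compile:

```python
import numpy as np

def group_norm(x: np.ndarray, num_groups: int, eps: float = 1e-5) -> np.ndarray:
    """Reference GroupNorm over an NCHW tensor (no affine parameters)."""
    n, c, h, w = x.shape
    assert c % num_groups == 0, "channels must divide evenly into groups"
    # Split channels into groups, normalize each group over (channels, H, W)
    g = x.reshape(n, num_groups, c // num_groups, h, w)
    mean = g.mean(axis=(2, 3, 4), keepdims=True)
    var = g.var(axis=(2, 3, 4), keepdims=True)
    g = (g - mean) / np.sqrt(var + eps)
    return g.reshape(n, c, h, w)

# The 1, 64, 128, 128 input is the case that errored on HTP in the table above.
x = np.random.randn(1, 64, 128, 128).astype(np.float32)
y = group_norm(x, num_groups=32)
print(y.shape)  # (1, 64, 128, 128)
```

The output shape matches the input; only the normalization statistics differ per group, which is why the op's cost scales with the spatial size in the latency columns above.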

The test used QNN 2.16.

Do you know how I can report this bug to the QNN team?

Thanks!

quic-hitameht (Contributor) commented:

Tagging @quic-akinlawo @quic-mangal here.

@quic-hitameht quic-hitameht added the QNN Issues related to QNN label Jan 29, 2024
escorciav (Author) commented Feb 22, 2024

It may sound odd, but it is what it is: hardware and on-device work means too many moving parts.

I can't replicate the error for input 1, 64, 128, 128 anymore. The model using that op now compiles, and its latency is shown in the screenshot below (first column: msec; second column: QNN version).

[Screenshot: latency in msec per QNN version]

Details (for the record)

  • I cannot replicate the QNN compilation error anymore.
  • I tried different versions of QNN and different ONNX opset versions.
  • It's possible that the versions of onnx and PyTorch themselves play a role. I spent some time testing combinations, but decided to move on and enjoy the good news.
