How to use PyTorch or TensorFlow to check the predicted value #128

Open
Sniper1-love opened this issue Mar 5, 2024 · 1 comment

@Sniper1-love

Thank you very much for releasing this dataset. However, I have a question.
I can use meter.predictor to predict the inference latency for "id": "resnet34_350" (as an example).
Then I want to check whether the predicted value is correct, so I would like to run resnet34_350 with PyTorch. But how can I use the structure of resnet34_350, i.e. "{"input_im_0": {"inbounds": [], "attr": {"name": "input_im_0", "type": "Placeholder······", in PyTorch code?
I would really appreciate it if you could answer my question.
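
For context, here is a minimal sketch of how such a prediction can be made with nn-Meter's latency predictor API. The hardware predictor name ("cortexA76cpu_tflite21") and the file name resnet34_350.json holding the nnmeter-ir graph quoted above are illustrative assumptions, not values from the dataset:

```python
# Minimal sketch: predict latency from an nn-Meter IR graph.
# Assumptions: the nn-meter pip package is installed, "cortexA76cpu_tflite21"
# is the target predictor, and resnet34_350.json contains the nnmeter-ir graph.
import nn_meter

predictor = nn_meter.load_latency_predictor("cortexA76cpu_tflite21")
latency_ms = predictor.predict("resnet34_350.json", model_type="nnmeter-ir")
print(f"Predicted latency: {latency_ms:.2f} ms")  # nn-Meter reports latency in ms
```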

@JiahangXu
Collaborator

Thank you for raising this issue. The conversion from a PyTorch model to nn-Meter IR is unidirectional, so it is not possible to automatically generate Python code or an ONNX model from nn-Meter IR. However, you can build the PyTorch model yourself, convert it to nn-Meter IR manually, and then compare the results to make sure they are consistent.
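
To make that comparison concrete, here is a minimal sketch under a few assumptions: torchvision's resnet34 corresponds to the resnet34_350 entry apart from the 350×350 input resolution, the predictor and file names are the same illustrative ones as above, and the input_shape/apply_nni arguments match your installed nn-Meter version:

```python
# Sketch: predict latency from a locally built PyTorch model and compare it
# with the prediction obtained from the dataset's nnmeter-ir graph.
# Assumptions: "resnet34_350" matches torchvision's resnet34 with a
# 1x3x350x350 input; predictor and file names are illustrative.
from torchvision.models import resnet34
import nn_meter

predictor = nn_meter.load_latency_predictor("cortexA76cpu_tflite21")

# Prediction from the dataset's nnmeter-ir graph.
ir_latency = predictor.predict("resnet34_350.json", model_type="nnmeter-ir")

# Prediction from a PyTorch model; nn-Meter converts it to the same IR
# internally, so the two numbers should agree if the architectures match.
model = resnet34()
torch_latency = predictor.predict(
    model, model_type="torch", input_shape=(1, 3, 350, 350), apply_nni=False
)

print(f"nnmeter-ir prediction:  {ir_latency:.2f} ms")
print(f"torch model prediction: {torch_latency:.2f} ms")
```

If the two predictions agree, the nnmeter-ir graph and your PyTorch definition describe the same network; to check the prediction against reality, you would still need to measure the model on the target hardware.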
