Tensorflow model is working in python but converted tfjs model is not working #8222
Comments
@gaikwadrahul8 Thank you for taking this as the assignee. Please tell me if you hit any bottlenecks or need any information.
Hi, @newgrit1004. We sincerely apologize for the delayed response. I see you've provided your GitHub repo, but I don't see the converted tfjs model. To expedite our investigation into the error you encountered, we would be grateful if you could share your converted tfjs model; I'll try to run it with your provided tfjs code to replicate the same behavior on my end. By gathering this information, we can attempt to reproduce the error and conduct a more thorough root-cause analysis. Thank you for your cooperation and patience.
Hi, @gaikwadrahul8. I forked the repository so you can easily reproduce the result; check it out here. I also uploaded the model files, except for some models that exceed 100 MB. You can use the uploaded model files, or generate them by following the README.md. I hope this issue can be solved. Thank you. Also, feel free to ask anything while you reproduce this.
Also, I exported the ONNX model with opset version 12 because doing so lets me avoid a known error.
Hi, @newgrit1004. I apologize for the delayed response, and thank you for sharing your GitHub repo with us. I'm able to reproduce the same error that you reported in your issue template, so we'll have to dig more into this issue and will update you soon. Thank you for bringing this issue to our attention; I really appreciate your valuable time and effort. For reference, I have added an output screenshot below. Thank you for your cooperation and patience.
Hi @newgrit1004, This repo is |
Hi, @kaka-lin |
System information
Describe the current behavior
I tried to convert the EfficientSAM-ti (https://github.com/yformer/EfficientSAM) decoder into TensorFlow.js.
The original model is PyTorch, so I had to convert it step by step.
Torch -> ONNX
git clone https://github.com/yformer/EfficientSAM.git
Then replace the code that uses the torch.tile function in efficient_sam_decoder (linked below), since torch.tile is not supported in ONNX opset 12.
https://github.com/yformer/EfficientSAM/blob/c9408a74b1db85e7831977c66e9462c6f4891729/efficient_sam/efficient_sam_decoder.py#L259
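The replacement snippet itself isn't included in the issue, but a minimal sketch of the kind of substitution involved could look like this: when the dims tuple has the same length as the tensor's rank, `Tensor.repeat` produces the same result as `torch.tile` and exports cleanly at opset 12 (the tensor here is a stand-in, not the real decoder code):

```python
# Sketch (assumption): torch.tile fails at ONNX opset 12, but Tensor.repeat
# is equivalent when len(dims) == tensor rank, so it can stand in for it.
import torch

x = torch.arange(6).reshape(2, 3)

tiled = torch.tile(x, (2, 3))   # original-style call
repeated = x.repeat(2, 3)       # export-friendly replacement

# the two produce identical tensors of shape (4, 9)
assert torch.equal(tiled, repeated)
```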
Then use this script to export the ONNX model, setting the opset to 12.
https://github.com/yformer/EfficientSAM/blob/main/export_to_onnx.py
ONNX -> TensorFlow
TensorFlow -> TensorFlow.js
I compared the results between PyTorch and TensorFlow and checked that they are the same.
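A comparison along these lines (the arrays below are placeholders, not the real model outputs) might look like:

```python
# Sketch of the parity check between the two backends.
# `torch_out` / `tf_out` are placeholders for the real decoder outputs.
import numpy as np

torch_out = np.array([[0.12, 0.87], [0.45, 0.33]])  # placeholder PyTorch output
tf_out = np.array([[0.12, 0.87], [0.45, 0.33]])     # placeholder TF output

# exact equality is too strict across frameworks; compare with a tolerance
assert np.allclose(torch_out, tf_out, atol=1e-5)
```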
My TensorFlow inference code is here. The reference for this code is https://github.com/yformer/EfficientSAM/blob/main/EfficientSAM_example.py
Finally, I want to run the tfjs code; however, the error message is very frustrating.
I think the dynamic input shape is the problem.
shape of pointLabels = [1, 1, number of points]
shape of pointCoords = [1, 1, number of points, 2]
but I want to keep the dynamic input.
How can I modify the code or command?
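For reference, one way to express a dynamic point dimension on the TensorFlow side before conversion is a `tf.function` input signature with `None` dims. This is a sketch of the idea, not a confirmed fix for this error, and the wrapped computation is a placeholder for the real decoder call:

```python
# Sketch: a wrapper whose signature leaves the number of points dynamic
# (axis 2 is None). The body is a placeholder, not the real decoder.
import tensorflow as tf

@tf.function(input_signature=[
    tf.TensorSpec([1, 1, None, 2], tf.float32, name="point_coords"),
    tf.TensorSpec([1, 1, None], tf.float32, name="point_labels"),
])
def decode(point_coords, point_labels):
    return tf.reduce_sum(point_coords) + tf.reduce_sum(point_labels)

# the same traced function accepts any number of points
decode(tf.zeros([1, 1, 3, 2]), tf.zeros([1, 1, 3]))
decode(tf.zeros([1, 1, 7, 2]), tf.zeros([1, 1, 7]))
```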