Can't load TFLite model on Android/iOS - NODE PAD failed to prepare #48108
Comments
Could you verify whether the given input is valid for the original ONNX model and the TF SavedModel above? If the TF SavedModel can handle the given inputs successfully, we can more easily spot the problem location.
@abattery Thanks for taking a look. There's no problem with running the SavedModel directly:
@nmfisher if possible, could you provide the saved model directory to us for debugging?
@abattery Sure, here you go
With the tf-nightly version, the above saved model converts successfully, and the converted model runs correctly under the TFLite benchmark tool.
I think the above input tensor should have shape (1, 64, 11), but your code above sets the tensor data with shape (1, 64, 1).
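To make the mismatch concrete, here is a plain-Python sketch: the shapes come from this thread, but the helper function is hypothetical, just illustrating that a (1, 64, 1) buffer cannot hold the element count the model expects.

```python
from math import prod

def num_elements(shape):
    """Total element count of a tensor with the given shape (hypothetical helper)."""
    return prod(shape)

expected = num_elements((1, 64, 11))  # what the converted model expects
provided = num_elements((1, 64, 1))   # what the calling code supplies

assert expected == 704
assert provided == 64
assert expected != provided  # the interpreter cannot reconcile these buffers
```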
Thanks @abattery, but the Python conversion isn't the problem - that completes successfully with either (1, 64, 1) or (1, 64, 11). The problem is the C++ code, which segfaults on the call to AllocateTensors. I've tried reshaping the tensors before calling AllocateTensors in C++, but this doesn't make a difference.
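For reference, a minimal Python sketch of the resize-before-allocate flow that mirrors the C++ AllocateTensors sequence. The model here is a stand-in built on the fly with a dynamic last dimension, not the model from this issue, and it assumes a recent TF with dynamic-shape support in the converter.

```python
import numpy as np
import tensorflow as tf

# Stand-in model (NOT the VAD model from this issue) with a dynamic last
# dimension, so the input can be resized before allocation.
class Doubler(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec([1, 64, None], tf.float32)])
    def __call__(self, x):
        return x * 2.0

m = Doubler()
cf = m.__call__.get_concrete_function()
converter = tf.lite.TFLiteConverter.from_concrete_functions([cf], m)
tflite_model = converter.convert()

interpreter = tf.lite.Interpreter(model_content=tflite_model)
input_index = interpreter.get_input_details()[0]["index"]
# Resize BEFORE allocating, mirroring the C++ ResizeInputTensor ->
# AllocateTensors order:
interpreter.resize_tensor_input(input_index, [1, 64, 11])
interpreter.allocate_tensors()
interpreter.set_tensor(input_index, np.ones([1, 64, 11], np.float32))
interpreter.invoke()
```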
Could you verify whether the TF version that the above C++ program was built against is 2.4.1 or tf-nightly?
If possible, since the model converts successfully with tf-nightly, please upgrade the TFLite C++ library on Android/iOS to the tf-nightly version.
@abattery I've tried building the TFLite C++ library both from 2.4.1 and nightly (and from the official Docker container, and directly from the GitHub repository with my existing NDK). None of those work.
I actually ran your model successfully with the TFLite benchmark tool, which is built with the TFLite C++ API, including the AllocateTensors invocation. Hmm, I couldn't reproduce your issue. Could you make sure that the TFLite model being used with the C++ API is not an outdated one? Can you verify whether the issue is reproducible with https://www.tensorflow.org/lite/performance/measurement#benchmark_tools ?
Thanks @abattery - I just tried building with the latest master (1e8f466) (NOT the nightly branch, which wouldn't even compile), and the model can now be loaded properly on both Android and iOS, in both the benchmark tool and my C++ code. Thanks for the help, closing this issue. Also, for future reference, are the nightly releases actually built from the nightly branch?
In my understanding, they are built from the latest master branch.
Thanks @abattery, I think that might have been my problem (trying to build from the nightly branch).
System information
Describe the current behavior
This model has been converted from PyTorch -> ONNX -> TFLite.
Loading the ONNX model, converting it to a saved model, converting that to TFLite, and loading it in the TFLite interpreter all work fine in a notebook on nightly (da68297):
However, loading the same model on Android (with the TFLite C++ API) will either fail with "NODE PAD failed to prepare" or crash with:
Occasionally it seems to move past this Pad operation successfully, and will then fail with "Node number 2 (SPLIT) failed to prepare".
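One plausible reading of the SPLIT failure, given the shape mismatch discussed in the comments above, is that an equal split along the last axis cannot prepare when that axis has the wrong length. This is speculation on my part; the helper below is hypothetical and only mirrors the divisibility check a SPLIT kernel performs at prepare time.

```python
# Hypothetical sketch: TFLite's equal SPLIT requires the split axis to be
# divisible by the number of outputs. If the model expects (1, 64, 11) but
# receives (1, 64, 1), an 11-way split along the last axis is impossible,
# so the node fails to prepare. (Speculative mapping, not confirmed.)

def can_split_equally(axis_len: int, num_splits: int) -> bool:
    """True if an axis of length axis_len divides evenly into num_splits parts."""
    return axis_len % num_splits == 0

assert can_split_equally(11, 11)     # expected shape: prepare would succeed
assert not can_split_equally(1, 11)  # mis-shaped input: prepare would fail
```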
This happens regardless of whether TFLite is built via the official Docker release (current nightly), with select ops, or from source (nightly or 2.4.1).
Also, the ONNX model cannot be converted to TFLite with 2.4.1, which gives the following error:
If I set converter.experimental_new_converter = False, then I get the following error during conversion. I've also tried manually setting the input shapes, but this then fails with other errors:
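For anyone reproducing the two conversion paths, here is a minimal sketch of toggling the converter flag. The tiny tf.Module is a placeholder, not the NeMo VAD model from this issue.

```python
import tempfile
import tensorflow as tf

# Placeholder model (NOT the model from this issue), saved as a SavedModel
# to mirror the ONNX -> SavedModel -> TFLite flow described above.
class Net(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec([1, 64, 11], tf.float32)])
    def __call__(self, x):
        return tf.reduce_sum(x, axis=-1)

net = Net()
saved_dir = tempfile.mkdtemp()
tf.saved_model.save(net, saved_dir,
                    signatures=net.__call__.get_concrete_function())

converter = tf.lite.TFLiteConverter.from_saved_model(saved_dir)
converter.experimental_new_converter = True  # MLIR-based converter (the default)
tflite_model = converter.convert()

# Setting converter.experimental_new_converter = False instead selects the
# legacy TOCO path, which is what raised the second error quoted above.
```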
Inspecting the original ONNX model via netron.app doesn't show anything unusual:
I think I did manage to successfully convert the model once (possibly with 2.3.1), but then experienced a similar "NODE xx failed to prepare" when running on Android.
The original model was from https://github.com/NVIDIA/NeMo/blob/ddd7e13cc0b81a377a55279eec7fe4ce0752f05e/tutorials/asr/07_Online_Offline_Microphone_VAD_Demo.ipynb, if that helps.
EDIT: on iOS with TFLite v2.4.1 built from source, the model converted with nightly errors with "Node number 2 (SPLIT) failed to prepare", and one built with an older version (2.3.1? not sure) errors with
Describe the expected behavior
The model should load properly on TFLite Android/iOS.
Standalone code to reproduce the issue
Other info / logs