Runtime error: failed to find input Node 'image_tensor' after conversion from protobuf to tflite model file #25171

Closed
defaultUser3214 opened this issue Jan 24, 2019 · 11 comments
Assignees: achowdhery
Labels: comp:lite (TF Lite related issues), stat:awaiting response (Status - Awaiting response from author)

Comments

defaultUser3214 commented Jan 24, 2019

Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No I used the DetectorActivity of android mobile demo app (https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/android) and the TensorFlow Lite demo app (https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/java/demo)
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04.1 LTS and LineageOS Android 8.1
  • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: Oneplus 5
  • TensorFlow installed from (source or binary): compiled from source with cuda 10 and bazel
  • TensorFlow version (use command below): in Python 2.7 (python): (tf.GIT_VERSION, tf.VERSION) = ('v1.12.0-0-ga6d8ffae09', '1.12.0'); in Python 3.6.7 (python3): b'v1.9.0-rc2-5108-g4e06be5f8f' 1.12.0-rc0
  • Python version: Python 2.7.15rc1 and Python 3.6.7
  • Bazel version (if compiling from source): either 0.20.0 or 0.21.0 (I don't know which version was used to compile the TensorFlow installed in Python, but I used 0.20.0 to run the bazel commands for converting the protobuf to the .tflite file; I think bazel 0.21 could not be used for that)
  • GCC/Compiler version (if compiling from source): gcc (Ubuntu 7.3.0-27ubuntu1~18.04) 7.3.0
  • CUDA/cuDNN version: CUDA 10.0 and cuDNN 7.3.1
  • GPU model and memory: GeForce GTX 970, RAM total: 16345820 and RAM swap: 2097148

Describe the current behavior

Because I assumed that protobuf files are much slower than .tflite files, I tried to convert a .pb to a .tflite:
I downloaded the r1.95 branch of TensorFlow and converted the frozen_inference_graph.pb from the model zoo (https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md) to a .tflite using #15633 (comment) and #15633 (comment). This worked well!

The .pb file worked well with my Android app, but after copying the .tflite model to the app/assets directory of the TensorFlow Mobile demo app (from https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/android) and replacing the .pb file in the code with:
private static final DetectorMode MODE = DetectorMode.TF_OD_API;

private static final int TF_OD_API_INPUT_SIZE = 300;
private static final String TF_OD_API_MODEL_FILE =
        "file:///android_asset/frozen_inference_graph.pb";
private static final String TF_OD_API_LABELS_FILE = "file:///android_asset/coco_labels_list.txt";

the following runtime error appears:

E/AndroidRuntime: FATAL EXCEPTION: main
Process: myPackage.myProcess, PID: 15491
java.lang.RuntimeException: Failed to find input Node 'image_tensor'
    at myPackage.myProcess.myClass.TensorFlowObjectDetectionAPIModel.create(TensorFlowObjectDetectionAPIModel.java:106)
    at myPackage.myProcess.myClass.DetectorActivity.onPreviewSizeChosen(DetectorActivity.java:146)
    at myPackage.myProcess.myClass.CameraActivity$5.onPreviewSizeChosen(CameraActivity.java:370)
    at myPackage.myProcess.myClass.CameraConnectionFragment.setUpCameraOutputs(CameraConnectionFragment.java:412)
    at myPackage.myProcess.myClass.CameraConnectionFragment.openCamera(CameraConnectionFragment.java:419)
    at myPackage.myProcess.myClass.CameraConnectionFragment.access$000(CameraConnectionFragment.java:66)
    at myPackage.myProcess.myClass.CameraConnectionFragment$1.onSurfaceTextureAvailable(CameraConnectionFragment.java:97)
    at android.view.TextureView.getHardwareLayer(TextureView.java:390)
I think #22565 is a similar issue.

Describe the expected behavior
I would have expected the .tflite version to work, because the .pb version of the same ssd_model works well!
Code to reproduce the issue

  1. Download the TensorFlow Mobile demo app from the link above
  2. Compile TensorFlow 1.12 from source using the CUDA settings mentioned above
  3. Download the model from the model zoo
  4. Execute the bazel commands from [Question&Error] Is there detection model like a SSD-Mobile-net in tensorflow-lite? #15633 (comment) and #15633 (comment), reproduced below:

Download and extract SSD MobileNet model

wget http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_coco_2017_11_17.tar.gz
tar -xvf ssd_mobilenet_v1_coco_2017_11_17.tar.gz
DETECT_PB=$PWD/ssd_mobilenet_v1_coco_2017_11_17/frozen_inference_graph.pb
STRIPPED_PB=$PWD/frozen_inference_graph_stripped.pb
DETECT_FB=$PWD/tensorflow/contrib/lite/examples/android/assets/mobilenet_ssd.tflite

Strip out problematic nodes before even letting TOCO see the graphdef

bazel run -c opt tensorflow/python/tools/optimize_for_inference -- \
  --input=$DETECT_PB --output=$STRIPPED_PB --frozen_graph=True \
  --input_names=Preprocessor/sub --output_names=concat,concat_1 \
  --alsologtostderr

Run TOCO conversion.

bazel run tensorflow/lite/toco:toco -- \
  --input_file=$STRIPPED_PB --output_file=$DETECT_FB \
  --input_format=TENSORFLOW_GRAPHDEF --output_format=TFLITE \
  --input_shapes=1,300,300,3 --input_arrays=Preprocessor/sub \
  --output_arrays=concat,concat_1 --inference_type=FLOAT --logtostderr
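
For reference, the same conversion can also be done through the Python converter API instead of the bazel-run toco binary. A minimal sketch, assuming a TF 1.x build (in 1.12 the class lives under tf.contrib.lite.TFLiteConverter, in later 1.x releases under tf.lite.TFLiteConverter); the file names and node names are the ones from the commands above:

import tensorflow as tf

# Convert the stripped frozen graph to a TFLite FlatBuffer.
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="frozen_inference_graph_stripped.pb",
    input_arrays=["Preprocessor/sub"],
    output_arrays=["concat", "concat_1"],
    input_shapes={"Preprocessor/sub": [1, 300, 300, 3]})
tflite_model = converter.convert()

with open("mobilenet_ssd.tflite", "wb") as f:
    f.write(tflite_model)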

Build and install the demo

bazel build -c opt --cxxopt='--std=c++11' //tensorflow/contrib/lite/examples/android:tflite_demo
adb install -r -f bazel-bin/tensorflow/contrib/lite/examples/android/tflite_demo.apk

  5. Then copy the .tflite model to your app's app/assets directory and refer to it in your code as mentioned above.
  6. After the Gradle build and running the app on your phone, you will probably get the same runtime error I did.

Other info / logs

bug_tracker_bazel_run_warnings.txt
bug_tracker_runtime_error.txt

freedomtan (Contributor) commented Jan 25, 2019

I really cannot tell whether the app you tried to use is the TF Mobile one or the TFLite one. If what you want to try is the TFLite one, you should read this article first.

shashishekhar added the comp:lite (TF Lite related issues) label on Jan 29, 2019
defaultUser3214 (Author) commented Feb 1, 2019

Thanks, the bazel build command in that article solved my problem. Do you know how to get the output dimensions of the mobile_ssd_v2_coco file (found under https://storage.googleapis.com/download.tensorflow.org/models/tflite/gpu/mobile_ssd_v2_float_coco.tflite) that is referenced in the tutorial https://www.tensorflow.org/lite/performance/gpu#supported_models_and_ops? I would like to know how my outputMap has to be structured:
tflite.runForMultipleInputsOutputs(inputArray,outputMap);

I have tried to use

bazel run tensorflow/tools/graph_transforms:summarize_graph -- --in_graph=/abolutePath/src/main/assets/mobile_ssd_v2_float_coco.tflite

No inputs spotted.
No variables spotted.
No outputs spotted.
Found 0 (0) const parameters, 0 (0) variable parameters, and 0 control_edges
Op types used:
To use with tensorflow/tools/benchmark:benchmark_model try these arguments:
bazel run tensorflow/tools/benchmark:benchmark_model -- --graph=/home/stephie/Documents/BA/Tensorflow-stable-v1.12.0/tensorflow/tensorflow/lite/java/demo/app/src/main/assets/mobile_ssd_v2_float_coco.tflite --show_flops --input_layer= --input_layer_type= --input_layer_shape= --output_layer=

Even when I analyse a .pb file, the command doesn't tell me how my outputMap has to be formed.

Thank you so much!!

freedomtan (Contributor) commented:

summarize_graph is for TensorFlow's .pb files; it doesn't handle TFLite FlatBuffer files. A short Python script like this, let's call it foo.py, can print the output nodes:

import sys
from tensorflow.lite.python import interpreter as interpreter_wrapper

interpreter = interpreter_wrapper.Interpreter(model_path=sys.argv[1])
print(interpreter.get_output_details())

Run it,

> python /tmp/foo.py /tmp/mobile_ssd_v2_float_coco.tflite
{'index': 307, 'shape': array([   1, 2034,    4], dtype=int32), 'quantization': (0.0, 0L), 'name': 'raw_outputs/box_encodings', 'dtype': <type 'numpy.float32'>}, {'index': 308, 'shape': array([   1, 2034,   91], dtype=int32), 'quantization': (0.0, 0L), 'name': 'raw_outputs/class_predictions', 'dtype': <type 'numpy.float32'>}

It seems the model doesn't come with post-processing nodes.
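
A minimal Python sketch (same interpreter API as foo.py above) that feeds a dummy image and reads the two raw output tensors; a Java outputMap passed to runForMultipleInputsOutputs would need float buffers of the same shapes, [1, 2034, 4] and [1, 2034, 91]:

import numpy as np
from tensorflow.lite.python import interpreter as interpreter_wrapper

interpreter = interpreter_wrapper.Interpreter(
    model_path="/tmp/mobile_ssd_v2_float_coco.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy float input with whatever shape the model reports.
dummy = np.zeros(input_details[0]['shape'], dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()

box_encodings = interpreter.get_tensor(output_details[0]['index'])      # [1, 2034, 4]
class_predictions = interpreter.get_tensor(output_details[1]['index'])  # [1, 2034, 91]
print(box_encodings.shape, class_predictions.shape)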

defaultUser3214 (Author) commented:

@freedomtan Thank you so much for your answer, which was super helpful!! Could you please tell me where I can find some documentation about the pre-/post-processing nodes of models that are usable in TensorFlow Lite? I have already searched, but only found material describing the post-processing of bounding boxes as drawing them, which was not very helpful.

defaultUser3214 (Author) commented:

@freedomtan the image_tensor error from above only appears in the TF Mobile app; in the TFLite demo app it could be resolved by following your link. Is TF Mobile only able to use protobuf files, and is that the reason for the error?

4nonymou5 commented:

@defaultUser3214
were you able to find anything about the post-processing of the object detection model used with the TFLite GPU snippet?

freedomtan (Contributor) commented:

@defaultUser3214 Surely TFLite doesn't have the image_tensor problem, because there is no such node if you follow the article I mentioned. I don't know if there is any documentation on the preprocessing and postprocessing of the model. I guess most people figure it out by reading the paper and the source code.
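
For a rough idea of what that missing post-processing usually involves for TF Object Detection API SSD models, here is a hedged numpy sketch: decode each raw box encoding against its anchor box, turn the class predictions into per-class scores (typically a sigmoid, with class 0 as background), then apply score thresholding and non-max suppression. The scale factors (10, 10, 5, 5) are the usual defaults and the anchors have to be regenerated from the training pipeline config; both are assumptions, not values stored in the .tflite file.

import numpy as np

def decode_boxes(box_encodings, anchors, scales=(10.0, 10.0, 5.0, 5.0)):
    # box_encodings: [N, 4] as (ty, tx, th, tw); anchors: [N, 4] as
    # (ycenter, xcenter, height, width). Returns [N, 4] as (ymin, xmin, ymax, xmax).
    ty, tx, th, tw = (box_encodings[:, i] for i in range(4))
    ya, xa, ha, wa = (anchors[:, i] for i in range(4))
    ycenter = ty / scales[0] * ha + ya
    xcenter = tx / scales[1] * wa + xa
    h = np.exp(th / scales[2]) * ha
    w = np.exp(tw / scales[3]) * wa
    return np.stack([ycenter - h / 2, xcenter - w / 2,
                     ycenter + h / 2, xcenter + w / 2], axis=-1)

def nms(boxes, scores, iou_threshold=0.6, max_detections=10):
    # Greedy non-max suppression; returns indices of kept boxes.
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0 and len(keep) < max_detections:
        i = order[0]
        keep.append(i)
        top_left = np.maximum(boxes[i, :2], boxes[order[1:], :2])
        bottom_right = np.minimum(boxes[i, 2:], boxes[order[1:], 2:])
        intersection = np.prod(np.clip(bottom_right - top_left, 0, None), axis=1)
        area_i = np.prod(boxes[i, 2:] - boxes[i, :2])
        areas = np.prod(boxes[order[1:], 2:] - boxes[order[1:], :2], axis=1)
        iou = intersection / (area_i + areas - intersection + 1e-8)
        order = order[1:][iou <= iou_threshold]
    return keep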

gargn assigned achowdhery and unassigned gargn on Mar 12, 2019
achowdhery commented:

@freedomtan Is this question regarding the CPU execution or the GPU execution? The CPU execution was specified in the article.

freedomtan (Contributor) commented:

@achowdhery I don't have a question; I was trying to answer @defaultUser3214's question :-) And yes, I think it's a CPU execution question.

jdduke (Member) commented Jul 12, 2019

> the image_tensor error from above only appears in the TF Mobile app; in the TFLite demo app it could be resolved by following your link. Is TF Mobile only able to use protobuf files, and is that the reason for the error?

TF Mobile consumes frozen graph (.pb) files, whereas TFLite consumes converted FlatBuffer (.tflite) files. They are incompatible and not interchangeable. Do you have a specific question about TensorFlow Lite execution?
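
A minimal sketch of the two loading paths, just to illustrate the incompatibility (TF 1.x APIs assumed; under 1.12 the lite symbols live in tf.contrib.lite, and the file names here are placeholders):

import tensorflow as tf

# TF Mobile / full TensorFlow: load a frozen GraphDef (.pb).
graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())
with tf.Graph().as_default():
    tf.import_graph_def(graph_def, name="")

# TensorFlow Lite: load a converted FlatBuffer (.tflite).
interpreter = tf.lite.Interpreter(model_path="mobilenet_ssd.tflite")
interpreter.allocate_tensors()

# Passing a .tflite file to the first path (or a .pb to the second) fails;
# the two formats are not interchangeable.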

jdduke added the stat:awaiting response (Status - Awaiting response from author) label on Jul 12, 2019
tensorflowbutler (Member) commented:

We are closing this issue for now due to lack of activity. Please comment if this is still an issue for you. Thanks!
