
IllegalStateException: ERROR: Unable to run session Expects arg[0] to be uint8 but int32 is provided || Unsupported data type: UBYTE #980

Open
CodeCombiner opened this issue May 2, 2020 · 5 comments

Comments

CodeCombiner commented May 2, 2020

Trying to run RunGraphExample with a frozen_graph.pb from a Faster-RCNN model trained with TensorFlow 1.13.1, using org.deeplearning4j "1.0.0-beta6" (which bundles TensorFlow 1.15).

val data: Array[Array[Int]] = new Array[Array[Int]](img.getWidth * img.getHeight)
    for (i <- 0 until img.getWidth) {
      for (j <- 0 until img.getHeight) {
        val color = new Color(img.getRGB(i, j))
        val ar: Array[Int] = new Array(3)
        ar(0) = color.getRed.byteValue() & 0xff
        ar(1) = color.getGreen.byteValue() & 0xff
        ar(2) = color.getBlue.byteValue() & 0xff
        data(i * img.getHeight + j) = ar
      }
    }
var arr: INDArray = Nd4j.createFromArray(data)
//.castTo(org.nd4j.linalg.api.buffer.DataType.UBYTE)

The error occurs at the line inputMap.put(inputs.get(0), shapedArray).

If shapedArray is INT, the error is:

Unable to run session Expects arg[0] to be uint8 but int32 is provided

If shapedArray is SHORT, or cast with .castTo(UBYTE), the error is:

Unsupported data type: UBYTE
or
Unsupported data type: SHORT

https://github.com/eclipse/deeplearning4j-examples/blob/master/tf-import-examples/src/main/java/org/nd4j/examples/RunGraphExample.java
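
The uint8 requirement can be confirmed by reading the placeholder dtypes straight out of the frozen graph. A minimal sketch, assuming the TensorFlow proto classes that ship with ND4J's graph import (org.tensorflow.framework.*) are on the classpath; for this model, image_tensor should report DT_UINT8:

import java.io.{File, FileInputStream}
import org.apache.commons.io.IOUtils
import org.tensorflow.framework.GraphDef
import scala.collection.JavaConverters._

// Parse the frozen graph and print each Placeholder together with its declared dtype.
val graphBytes = IOUtils.toByteArray(new FileInputStream(new File("model/frozen_inference_graph.pb")))
val graphDef = GraphDef.parseFrom(graphBytes)
graphDef.getNodeList.asScala
  .filter(_.getOp == "Placeholder")
  .foreach(n => println(n.getName + ": " + n.getAttrMap.get("dtype").getType)) // expect image_tensor: DT_UINT8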

@CodeCombiner CodeCombiner changed the title java.lang.IllegalArgumentException: Unable to parse protobuf when "RunGraphExample" IllegalStateException: ERROR: Unable to run session Expects arg[0] to be uint8 but int32 is provided || Unsupported data type: UBYTE May 3, 2020

raver119 commented May 3, 2020

Can you post the full stack trace, please?

CodeCombiner commented May 3, 2020

Available compressors: [THRESHOLD] [NOOP] [GZIP] 
[2020-05-03 14:28:15,480] [INFO] [org.nd4j.linalg.factory.Nd4jBackend] [main] [] - Loaded [CpuBackend] backend
[2020-05-03 14:28:16,082] [INFO] [org.nd4j.nativeblas.NativeOpsHolder] [main] [] - Number of threads used for linear algebra: 1
[2020-05-03 14:28:16,086] [WARN] [org.nd4j.linalg.cpu.nativecpu.CpuNDArrayFactory] [main] [] - *********************************** CPU Feature Check Warning ***********************************
[2020-05-03 14:28:16,086] [WARN] [org.nd4j.linalg.cpu.nativecpu.CpuNDArrayFactory] [main] [] - Warning: Initializing ND4J with Generic x86 binary on a CPU with AVX/AVX2 support
[2020-05-03 14:28:16,086] [WARN] [org.nd4j.linalg.cpu.nativecpu.CpuNDArrayFactory] [main] [] - Using ND4J with AVX/AVX2 will improve performance. See deeplearning4j.org/cpu for more details
[2020-05-03 14:28:16,086] [WARN] [org.nd4j.linalg.cpu.nativecpu.CpuNDArrayFactory] [main] [] - Or set environment variable ND4J_IGNORE_AVX=true to suppress this warning
[2020-05-03 14:28:16,086] [WARN] [org.nd4j.linalg.cpu.nativecpu.CpuNDArrayFactory] [main] [] - *************************************************************************************************
[2020-05-03 14:28:16,267] [INFO] [org.nd4j.nativeblas.Nd4jBlas] [main] [] - Number of threads used for OpenMP BLAS: 4
[2020-05-03 14:28:16,384] [INFO] [org.nd4j.linalg.api.ops.executioner.DefaultOpExecutioner] [main] [] - Backend used: [CPU]; OS: [Mac OS X]
[2020-05-03 14:28:16,384] [INFO] [org.nd4j.linalg.api.ops.executioner.DefaultOpExecutioner] [main] [] - Cores: [8]; Memory: [4.0GB];
[2020-05-03 14:28:16,384] [INFO] [org.nd4j.linalg.api.ops.executioner.DefaultOpExecutioner] [main] [] - Blas vendor: [OPENBLAS]
Rank: 3, DataType: INT, Offset: 0, Order: c, Shape: [180,216,3],  Stride: [648,3,1]
Rank: 4, DataType: INT, Offset: 0, Order: c, Shape: [1,180,216,3],  Stride: [116640,648,3,1]
2020-05-03 14:28:20.988445: I tensorflow/core/platform/cpu_feature_guard.cc:145] This TensorFlow binary is optimized with Intel(R) MKL-DNN to use the following CPU instructions in performance critical operations:  AVX2 FMA
To enable them in non-MKL-DNN operations, rebuild TensorFlow with the appropriate compiler flags.
2020-05-03 14:28:20.988723: I tensorflow/core/common_runtime/process_util.cc:115] Creating new thread pool with default inter op setting: 8. Tune using inter_op_parallelism_threads for best performance.
java.lang.IllegalStateException: ERROR: Unable to run session Expects arg[0] to be uint8 but int32 is provided

Process finished with exit code 130 (interrupted by signal 2: SIGINT)


CodeCombiner commented May 3, 2020

Source code of the Scala functions:

  import java.awt.Color
  import java.io.{File, FileInputStream}
  import java.util
  import javax.imageio.ImageIO
  import org.apache.commons.io.IOUtils
  import org.nd4j.linalg.api.ndarray.INDArray
  import org.nd4j.linalg.factory.Nd4j
  import org.nd4j.tensorflow.conversion.graphrunner.GraphRunner

  // Loads the frozen graph into a GraphRunner and feeds the image tensor.
  def dlfjRunGraph = {
    val shapedArray = imageToShapedArrray("model/17part1x3.jpg")

    val inputs = util.Arrays.asList("image_tensor:0")
    val content = IOUtils.toByteArray(new FileInputStream(new File("model/frozen_inference_graph.pb")))
    try {
      val graphRunner = GraphRunner.builder.graphBytes(content).inputNames(inputs).build
      try {
        val inputMap: util.HashMap[String, INDArray] = new util.HashMap[String, INDArray]()
        inputMap.put(inputs.get(0), shapedArray)
        val run = graphRunner.run(inputMap)
        System.out.println("Run result " + run)
      } finally if (graphRunner != null) graphRunner.close()
    }
    catch {
      case e: Exception => System.out.println(e.toString)
    }
  }

  // Reads a JPEG and converts it to an NHWC INDArray of shape [1, height, width, 3].
  def imageToShapedArrray(filepath: String): INDArray = {
    val file = new File(filepath)
    val img = ImageIO.read(file)

    val data: Array[Array[Int]] = new Array[Array[Int]](img.getWidth * img.getHeight)
    for (i <- 0 until img.getWidth) {
      for (j <- 0 until img.getHeight) {
        val color = new Color(img.getRGB(i, j))
        val ar: Array[Int] = new Array(3)
        ar(0) = color.getRed.byteValue() & 0xff
        ar(1) = color.getGreen.byteValue() & 0xff
        ar(2) = color.getBlue.byteValue() & 0xff
        data(i * img.getHeight + j) = ar
      }
    }

    var arr: INDArray = Nd4j.createFromArray(data)
      /*.castTo(org.nd4j.linalg.api.buffer.DataType.UBYTE)*/
      .reshape(img.getHeight, img.getWidth, 3)
    var shapeInfo = arr.shapeInfoToString()
    System.out.println(shapeInfo)
    arr = Nd4j.expandDims(arr, 0)
    shapeInfo = arr.shapeInfoToString()
    System.out.println(shapeInfo)
    arr
  }
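
A side note on the conversion above, independent of the dtype problem: data is filled at index i * img.getHeight + j while i runs over the width, so the flattened buffer is laid out width-major, but it is then reshaped to (height, width, 3); unless the image is square, that scrambles rows and columns (the printed shape [180,216,3] means height 180, width 216). Below is a minimal row-major sketch whose fill order matches the reshape (the helper name imageToHwcArray is hypothetical); note that getRed/getGreen/getBlue already return values in 0..255, so the byteValue() & 0xff step is unnecessary:

import java.awt.Color
import java.io.File
import javax.imageio.ImageIO
import org.nd4j.linalg.api.ndarray.INDArray
import org.nd4j.linalg.factory.Nd4j

// Row-major fill: outer loop over rows (y, height), inner over columns (x, width),
// so index y * width + x lines up with reshape(height, width, 3).
def imageToHwcArray(filepath: String): INDArray = {
  val img = ImageIO.read(new File(filepath))
  val data = new Array[Array[Int]](img.getHeight * img.getWidth)
  for (y <- 0 until img.getHeight; x <- 0 until img.getWidth) {
    val c = new Color(img.getRGB(x, y))
    data(y * img.getWidth + x) = Array(c.getRed, c.getGreen, c.getBlue) // already 0..255
  }
  // Shape [1, height, width, 3]; still INT here, the uint8 cast issue is a separate problem.
  Nd4j.expandDims(Nd4j.createFromArray(data).reshape(img.getHeight, img.getWidth, 3), 0)
}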


CodeCombiner commented May 3, 2020

Working Python object detection source code, for comparison:

######## Image Object Detection Using Tensorflow-trained Classifier #########
#
# Author: Evan Juras
# Date: 1/15/18
# Description: 
# This program uses a TensorFlow-trained classifier to perform object detection.
# It loads the classifier and uses it to perform object detection on an image.
# It draws boxes and scores around the objects of interest in the image.

## Some of the code is copied from Google's example at
## https://github.com/tensorflow/models/blob/master/research/object_detection/object_detection_tutorial.ipynb

## and some is copied from Dat Tran's example at
## https://github.com/datitran/object_detector_app/blob/master/object_detection_app.py

## but I changed it to make it more understandable to me.

# Import packages
import os
import cv2
import numpy as np
import tensorflow as tf
import sys
import logging
import pickle

# This is needed since the notebook is stored in the object_detection folder.
sys.path.append("..")

# Import utilites
from utils import label_map_util
from utils import visualization_utils as vis_util

# Name of the directory containing the object detection module we're using
MODEL_NAME = 'inference_graph'
IMAGE_NAME = '17part1x3.jpg'

# Grab path to current working directory
CWD_PATH = os.getcwd()

# Path to frozen detection graph .pb file, which contains the model that is used
# for object detection.
PATH_TO_CKPT = os.path.join(CWD_PATH,MODEL_NAME,'frozen_inference_graph.pb')

# Path to label map file
PATH_TO_LABELS = os.path.join(CWD_PATH,'training','labelmap.pbtxt')

# Path to image
PATH_TO_IMAGE = os.path.join(CWD_PATH,IMAGE_NAME)

# Number of classes the object detector can identify
NUM_CLASSES = 17

# Load the label map.
# Label maps map indices to category names, so that when our convolution
# network predicts `5`, we know that this corresponds to the label.
# Here we use internal utility functions, but anything that returns a
# dictionary mapping integers to appropriate string labels would be fine
label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
category_index = label_map_util.create_category_index(categories)

# Load the Tensorflow model into memory.
detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
        serialized_graph = fid.read()
        od_graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(od_graph_def, name='')

    sess = tf.Session(graph=detection_graph)

# Define input and output tensors (i.e. data) for the object detection classifier

# Input tensor is the image
image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')

# Output tensors are the detection boxes, scores, and classes
# Each box represents a part of the image where a particular object was detected
detection_boxes = detection_graph.get_tensor_by_name('detection_boxes:0')

# Each score represents level of confidence for each of the objects.
# The score is shown on the result image, together with the class label.
detection_scores = detection_graph.get_tensor_by_name('detection_scores:0')
detection_classes = detection_graph.get_tensor_by_name('detection_classes:0')

# Number of objects detected
num_detections = detection_graph.get_tensor_by_name('num_detections:0')

# Load image using OpenCV and
# expand image dimensions to have shape: [1, None, None, 3]
# i.e. a single-column array, where each item in the column has the pixel RGB value
image = cv2.imread(PATH_TO_IMAGE)
image_expanded = np.expand_dims(image, axis=0)


# Perform the actual detection by running the model with the image as input
(boxes, scores, classes, num) = sess.run(
    [detection_boxes, detection_scores, detection_classes, num_detections],
    feed_dict={image_tensor: image_expanded})

# Draw the results of the detection (aka 'visualize the results')

vis_util.visualize_boxes_and_labels_on_image_array(
    image,
    np.squeeze(boxes),
    np.squeeze(classes).astype(np.int32),
    np.squeeze(scores),
    category_index,
    use_normalized_coordinates=True,
    line_thickness=2,
    min_score_thresh=0.20)

# All the results have been drawn on image. Now display the image.
cv2.imshow('Object detector',image)

# Press any key to close the image
cv2.waitKey(0)

# Clean up
cv2.destroyAllWindows()
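
For comparison with the sess.run call above, the GraphRunner equivalent would look roughly like the sketch below. It assumes the builder also accepts outputNames(...) for naming the four fetches (if this version does not expose it, the output names would have to be supplied another way), and it still runs into the uint8 input problem this issue is about:

import java.io.{File, FileInputStream}
import java.util
import org.apache.commons.io.IOUtils
import org.nd4j.linalg.api.ndarray.INDArray
import org.nd4j.tensorflow.conversion.graphrunner.GraphRunner

val inputs  = util.Arrays.asList("image_tensor:0")
val outputs = util.Arrays.asList("detection_boxes:0", "detection_scores:0",
                                 "detection_classes:0", "num_detections:0")
val graphBytes = IOUtils.toByteArray(new FileInputStream(new File("model/frozen_inference_graph.pb")))

val graphRunner = GraphRunner.builder
  .graphBytes(graphBytes)
  .inputNames(inputs)
  .outputNames(outputs) // assumed builder setter, mirroring the four fetches in the Python script
  .build

val shapedArray: INDArray = imageToShapedArrray("model/17part1x3.jpg") // from the Scala post above
val feed = new util.HashMap[String, INDArray]()
feed.put("image_tensor:0", shapedArray) // must end up as uint8 for this graph
val results: util.Map[String, INDArray] = graphRunner.run(feed)
println(results.get("detection_boxes:0"))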


raver119 commented May 3, 2020

@agibsonccc ^^^
