
Cannot convert between a TensorFlowLite buffer with XXX bytes and a ByteBuffer with XXX bytes #6

Open
moster67 opened this issue Aug 28, 2018 · 22 comments

Comments

@moster67

Hi, many thanks for the article and the sample-code.

It works fine with the model mentioned in your project. However, when I use my own model (trained with the tensorflow-for-poets 1 and 2 tutorials), I get this error with your code:

"Cannot convert between a TensorFlowLite buffer with XXX bytes and a ByteBuffer with XXX bytes."

This happens when running the following statement:
interpreter.run(byteBuffer, result);

My model works fine with the sample project in the tensorflow-for-poets-2 tutorial.

Just wondering what can be the issue. Any ideas?

Thanks.

@naris96

naris96 commented Sep 2, 2018

Same problem here. I tried all the steps and guidelines in these links but still, nothing seems to work...
1. tensorflow/tensorflow#14719 (comment)
2. tensorflow/tensorflow#14719

@moster67
Author

moster67 commented Sep 3, 2018

I resolved it by using this modified class:
https://github.com/COSE471/COSE471_android/blob/master/app/src/main/java/com/example/android/alarmapp/tflite/TensorFlowImageClassifier.java

@naris96

naris96 commented Sep 4, 2018

@moster67 It still isn't working for me. I use a similar code approach and also trained my own model. It's odd that the same number of layers in a CNN architecture gives different results with different image optimizations.

@EXJUSTICE

EXJUSTICE commented Sep 13, 2018

I resolved it by using this modified class:
https://github.com/COSE471/COSE471_android/blob/master/app/src/main/java/com/example/android/alarmapp/tflite/TensorFlowImageClassifier.java

Could you elaborate on your solution? I'm stuck on the same problem and don't see what exactly we are replacing here.

Never mind, I solved it by noting that with floats we use four times as many bytes per value.

@divSivasankaran

@EXJUSTICE you are right: changing the value type from int to float fixed it for me.

@Tanv33rA

@Div1090 Will you please specify exactly which values need to be changed?

@divSivasankaran

divSivasankaran commented Nov 15, 2018

@Tanv33rA In the function convertBitmapToByteBuffer:

  • Remember that we need 4 bytes for each value if our datatype is float.
    Replace
    ByteBuffer byteBuffer = ByteBuffer.allocateDirect(BATCH_SIZE * inputSize * inputSize * PIXEL_SIZE);
    with
    ByteBuffer byteBuffer = ByteBuffer.allocateDirect(4 * BATCH_SIZE * inputSize * inputSize * PIXEL_SIZE);

  • Also, there is a separate method for adding float values to the byte buffer:
    replace byteBuffer.put with byteBuffer.putFloat.

This ought to fix the problem!
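The arithmetic above explains the byte counts reported in this thread. A minimal sketch (assuming a 224×224 RGB input with BATCH_SIZE = 1 and PIXEL_SIZE = 3, as in the linked classifier; the class and method names here are mine, not from the project):

```java
import java.nio.ByteBuffer;

public class BufferSizing {
    static final int BATCH_SIZE = 1;
    static final int PIXEL_SIZE = 3; // RGB channels

    // Bytes the interpreter expects for one input tensor:
    // 1 byte per channel for a quantized (uint8) model, 4 for float32.
    static int expectedBytes(int inputSize, boolean quantized) {
        int bytesPerChannel = quantized ? 1 : 4;
        return bytesPerChannel * BATCH_SIZE * inputSize * inputSize * PIXEL_SIZE;
    }

    public static void main(String[] args) {
        System.out.println(expectedBytes(224, true));  // 150528 (quantized buffer)
        System.out.println(expectedBytes(224, false)); // 602112 (float model tensor)

        // Allocating with the wrong bytesPerChannel is what triggers the
        // "Cannot convert between ... bytes and ... bytes" exception.
        ByteBuffer quantBuffer = ByteBuffer.allocateDirect(expectedBytes(224, true));
        System.out.println(quantBuffer.capacity()); // 150528
    }
}
```

Note that 150528 and 602112 are exactly the two numbers that appear in several of the error messages in this thread: a float model with a buffer sized for a quantized one (or vice versa) is off by that factor of 4.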

@soum-io
Contributor

soum-io commented Feb 5, 2019

My recent PR added support for float models. Simply change the variable QUANT to false in TensorFlowImageClassifier.java, along with changing the model and labels file names.

@SergeyKarleev

SergeyKarleev commented Feb 6, 2019

Check which values the methods getImageSizeX() and getImageSizeY() return in your ImageClassifier class,
and compare them with the heading of the MobileNet model you are using as the pre-trained model (https://www.tensorflow.org/lite/models).

For example, for the model Mobilenet_V1_0.25_192, the following constant values should be set:
static final int DIM_IMG_SIZE_X = 192;
static final int DIM_IMG_SIZE_Y = 192;

getImageSizeX() = 192
getImageSizeY() = 192

@krishnachourasia

thank you so much @soum-io
now it works perfectly fine

@saurabhg476

I was facing the exact same issue with values:
"cannot convert between a tensorflow lite buffer with 602112 bytes and a bytebuffer with 150528 bytes"

The problem was that I had converted my MobileNet model with the Python API (for TF Lite conversion); when I used the command-line API for the same model, it worked.

Hope it helps.

The command line api is available at :

https://www.tensorflow.org/lite/convert/cmdline_examples

@dvbeelen

dvbeelen commented Aug 29, 2019

I also downloaded the Tensorflow for Poets 2 github repository. While I was trying to place my graph and labels into the tflite-app, I got this error.

I resolved the issue by following @SergeyKarleev's answer. For me, changing the static final int DIM_IMG_SIZE_X and static final int DIM_IMG_SIZE_Y values to 299 was the answer. These values can be found in the ImageClassifier.java file in the android folder.

I guess that not setting the IMG_SIZE correctly while following the tutorial is what causes the issue.

I don't know if this issue is still unresolved, but I thought I'd share my fix anyway.
Hope it helps.

@NightFury13

NightFury13 commented Oct 1, 2019

I created a custom model using the Google Vision API, and the expected input size for that model was 512x512, as opposed to the 300x300 of MobileSSDNet (the default in the TFLite example). I changed private static final int TF_OD_API_INPUT_SIZE = 512; in DetectorActivity.java and also updated private static final int NUM_DETECTIONS = 20; in TFLiteObjectDetectionAPIModel.java. These two changes solved it for me. Hope it helps someone in the future.

@xenogew

xenogew commented Oct 17, 2019

I use this code base and ran into the same issue after downloading my trained model from Azure Custom Vision to use instead of the project's default model. I should say up front that I'm a newbie to ML and TensorFlow.

URL
https://github.com/xenogew/examples/tree/master/lite/examples/object_detection/android

Yes, I know this is object detection, not image classification like this issue, but this thread is the best match I found when googling the error.

Here is the error message:
Cannot convert between a TensorFlowLite buffer with 2076672 bytes and a Java Buffer with 270000 bytes.

I tried to follow all your comments and found this line in the code:
d.imgData = ByteBuffer.allocateDirect(1 * d.inputSize * d.inputSize * 3 * numBytesPerChannel);

The value this formula produces matches the second number in the error message. When I changed it to match the first number, I got a new error:
Cannot copy between a TensorFlowLite tensor with shape [1, 13, 13, 55] and a Java object with shape [1, 10, 4].

So, how can I switch from the example's default model to my own model trained in Azure and use it without errors? Which code should I change to make the application run?
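Not a definitive answer, but a back-of-the-envelope sketch that may help decode such error messages: assuming a square [1, size, size, 3] input tensor, the input size each side expects can be inferred from the byte counts (the helper below is hypothetical, not part of the example app):

```java
public class InferInputSize {
    // Infer the square input size implied by a byte count, assuming a
    // [1, size, size, 3] tensor with 1 byte per channel (quantized uint8)
    // or 4 bytes per channel (float32).
    static int inferInputSize(long tensorBytes, boolean quantized) {
        int bytesPerChannel = quantized ? 1 : 4;
        return (int) Math.round(Math.sqrt(tensorBytes / (double) (3 * bytesPerChannel)));
    }

    public static void main(String[] args) {
        // The model tensor from the error above: 2076672 bytes as float32
        System.out.println(inferInputSize(2076672, false)); // 416
        // The Java buffer the app actually sent: 270000 bytes as uint8
        System.out.println(inferInputSize(270000, true));   // 300
    }
}
```

If that arithmetic holds, the Azure model expects a 416×416 float input while the app is sending a 300×300 quantized one, so both the input size and the quantized flag would need updating. The second error ([1, 13, 13, 55] vs [1, 10, 4]) additionally suggests the model's output tensors have a different shape than the SSD-style postprocessing in the example expects, so the output arrays would need changing as well.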

@smone000

smone000 commented Feb 4, 2020

I'm getting the same error :((

The models (both float and quantized) were built with the code at:
https://github.com/frogermcs/TFLite-Tester/blob/master/notebooks/Testing_TFLite_model.ipynb

However, the app gives this error:
"cannot convert between a tensorflow lite buffer with 602112 bytes and a bytebuffer with 150528 bytes"

Please help me solve this issue :(((

@harsh204016

I tried all the above steps, but it's still not working:
java.lang.IllegalArgumentException: Cannot convert between a TensorFlowLite buffer with 3136 bytes and a Java Buffer with 9408 bytes.
[screenshot]
I have changed the model, labels, and input size...
@soum-io @moster67 @SergeyKarleev

@SolArabehety

@xenogew I have exactly the same error. Did you find the solution?

@yogithesymbian

yogithesymbian commented Aug 24, 2020

I also got:

Process: org.tensorflow.lite.examples.detection, PID: 30742
    java.lang.IllegalArgumentException: Cannot copy to a TensorFlowLite tensor (input_2) with 602112 bytes from a Java Buffer with 1080000 bytes.

I tried changing private static final boolean TF_OD_API_IS_QUANTIZED = true; to false.

Error location:

at org.tensorflow.lite.examples.detection.tflite.TFLiteObjectDetectionAPIModel.recognizeImage(TFLiteObjectDetectionAPIModel.java:196)
        at org.tensorflow.lite.examples.detection.DetectorActivity$2.run(DetectorActivity.java:181)

When I tried changing to float like this:

    tfLite.runForMultipleInputsOutputs(new float[][]{new float[]{Float.parseFloat(Arrays.toString(inputArray))}}, outputMap);

I got a new problem:
java.lang.NumberFormatException: For input string: "[java.nio.DirectByteBuffer[pos=1080000 lim=1080000 cap=1080000]]"
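For what it's worth, the NumberFormatException is what Arrays.toString produces here: called on an Object[] holding a ByteBuffer, it returns the buffer's description string, not pixel data, and Float.parseFloat cannot parse that. A small demonstration (assuming, from the stack trace above, that this is the cause):

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class WhyParseFloatFails {
    static boolean parseable(String s) {
        try {
            Float.parseFloat(s);
            return true;
        } catch (NumberFormatException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        Object[] inputArray = { ByteBuffer.allocateDirect(16) };
        // Produces something like "[java.nio.DirectByteBuffer[pos=0 lim=16 cap=16]]"
        String s = Arrays.toString(inputArray);
        System.out.println(parseable(s)); // false: not a number
    }
}
```

The example's TFLiteObjectDetectionAPIModel passes the direct ByteBuffer itself inside the Object[] to runForMultipleInputsOutputs; keeping that and sizing the buffer to match the tensor (602112 bytes here) seems more promising than converting it to a float via a string.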

@pkpdeveloper

pkpdeveloper commented Aug 29, 2020

Hello guys,
I also had a similar problem yesterday. I would like to mention the solution that worked for me.

It seems TFLite only supports exact square bitmap inputs, e.g.:
size 256 x 256: detection works
size 256 x 255: detection does not work and throws an exception

And the maximum size supported:
257 x 257 should be the maximum width and height for any bitmap input.

Here is the sample code to crop and resize a bitmap:

private var MODEL_HEIGHT = 257
private var MODEL_WIDTH = 257

Crop the bitmap:
val croppedBitmap = cropBitmap(bitmap)

Create a scaled version of the bitmap for the model input:
val scaledBitmap = Bitmap.createScaledBitmap(croppedBitmap, MODEL_WIDTH, MODEL_HEIGHT, true)

https://github.com/tensorflow/examples/blob/master/lite/examples/posenet/android/app/src/main/java/org/tensorflow/lite/examples/posenet/PosenetActivity.kt#L578

Crop the bitmap to maintain the aspect ratio of the model input:

private fun cropBitmap(bitmap: Bitmap): Bitmap {
  val bitmapRatio = bitmap.height.toFloat() / bitmap.width
  val modelInputRatio = MODEL_HEIGHT.toFloat() / MODEL_WIDTH
  var croppedBitmap = bitmap

  // Acceptable difference between the modelInputRatio and bitmapRatio to skip cropping.
  val maxDifference = 1e-5

  // Checks if the bitmap has a similar aspect ratio as the required model input.
  when {
    abs(modelInputRatio - bitmapRatio) < maxDifference -> return croppedBitmap
    modelInputRatio < bitmapRatio -> {
      // New image is taller so we are height constrained.
      val cropHeight = bitmap.height - (bitmap.width.toFloat() / modelInputRatio)
      croppedBitmap = Bitmap.createBitmap(
        bitmap,
        0,
        (cropHeight / 2).toInt(),
        bitmap.width,
        (bitmap.height - cropHeight).toInt()
      )
    }
    else -> {
      val cropWidth = bitmap.width - (bitmap.height.toFloat() * modelInputRatio)
      croppedBitmap = Bitmap.createBitmap(
        bitmap,
        (cropWidth / 2).toInt(),
        0,
        (bitmap.width - cropWidth).toInt(),
        bitmap.height
      )
    }
  }
  return croppedBitmap
}

https://github.com/tensorflow/examples/blob/master/lite/examples/posenet/android/app/src/main/java/org/tensorflow/lite/examples/posenet/PosenetActivity.kt#L451
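To make the crop arithmetic above checkable without an Android device, here is a plain-Java sketch of the same logic that returns the crop rectangle as {x, y, width, height} (the class and method names are mine, not from the PoseNet example):

```java
import java.util.Arrays;

public class CropMath {
    static int[] cropRect(int width, int height, int modelWidth, int modelHeight) {
        double bitmapRatio = height / (double) width;
        double modelInputRatio = modelHeight / (double) modelWidth;
        double maxDifference = 1e-5;

        if (Math.abs(modelInputRatio - bitmapRatio) < maxDifference) {
            // Aspect ratios already match: no crop needed.
            return new int[] { 0, 0, width, height };
        } else if (modelInputRatio < bitmapRatio) {
            // Image is taller than the model ratio: trim top and bottom.
            int cropHeight = height - (int) (width / modelInputRatio);
            return new int[] { 0, cropHeight / 2, width, height - cropHeight };
        } else {
            // Image is wider than the model ratio: trim left and right.
            int cropWidth = width - (int) (height * modelInputRatio);
            return new int[] { cropWidth / 2, 0, width - cropWidth, height };
        }
    }

    public static void main(String[] args) {
        // 400x800 portrait image, 257x257 model input: 400px of height is trimmed.
        System.out.println(Arrays.toString(cropRect(400, 800, 257, 257))); // [0, 200, 400, 400]
    }
}
```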

Hope it helps
Thanks and Regards
Pankaj

@krn-sharma

@pkpdeveloper Can you give me a source for the maximum supported size?
Thanks

@pk-development

pk-development commented Oct 24, 2020

This worked for me
[screenshot]

I did not dig too deep into why it works, but I got back a prediction from my model.

Edit the method in this file to the one below

https://github.com/COSE471/COSE471_android/blob/master/app/src/main/java/com/example/android/alarmapp/tflite/TensorFlowImageClassifier.java

[screenshot]

Edit: remove the D typo in the image above (I must have pressed a key while copying the screen)

  • I trained my model with TensorFlow
  • Converted it in Python with tf.lite.TFLiteConverter.from_saved_model
  • Copied the tflite file over to the Android assets folder
  • Created a file called labels.txt (holds cat and dog) and put that in the assets folder
  • Found an image of a cat online and converted it to 128 x 128, because that is what my model uses
  • Copied that file into my drawable folder
  • Used the code you see above
  • Worked like a charm :-)

Hopefully, that helps someone

@kartikeysaran

Resizing the bitmap worked for me:

Bitmap resized = Bitmap.createScaledBitmap(bitmap, 300, 300, true);

Here, 300 x 300 is the size of the model's input matrix.
