
Exception encountered: Unrecognized keyword arguments: ['batch_shape'] #66381

Closed
MuhammadBilal848 opened this issue Apr 24, 2024 · 6 comments
Labels: comp:keras (Keras related issues), stat:awaiting response (Status - Awaiting response from author), TF 2.16, TFLiteConverter (For issues related to TFLite converter)

Comments

@MuhammadBilal848 commented Apr 24, 2024

I tried this on TF 2.16.1 and 2.13.0, on Python 3.10 and 3.8 respectively. My model is not big (under a megabyte), but that should not cause a problem in any case.

CODE:

import tensorflow as tf

# Load the Keras model saved in HDF5 format.
h5_model = tf.keras.models.load_model('helfen_1.h5')

# Convert the loaded model to TFLite.
converter = tf.lite.TFLiteConverter.from_keras_model(h5_model)
tflite_model = converter.convert()

with open('converted_model.tflite', 'wb') as f:
    f.write(tflite_model)

ERROR:
2024-04-24 23:00:17.022540: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: SSE SSE2 SSE3 SSE4.1 SSE4.2 AVX AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
Traceback (most recent call last):
File "t.py", line 3, in
h5_model = tf.keras.models.load_model('helfen_1.h5')
File "f:\Projects\Conv\convert\lib\site-packages\keras\src\saving\saving_api.py", line 238, in load_model
return legacy_sm_saving_lib.load_model(
File "f:\Projects\Conv\convert\lib\site-packages\keras\src\utils\traceback_utils.py", line 70, in error_handler
raise e.with_traceback(filtered_tb) from None
File "f:\Projects\Conv\convert\lib\site-packages\keras\src\engine\base_layer.py", line 870, in from_config
raise TypeError(
TypeError: Error when deserializing class 'InputLayer' using config={'batch_shape': [None, 50], 'dtype': 'float32', 'sparse': False, 'name': 'input_layer_10'}.

Exception encountered: Unrecognized keyword arguments: ['batch_shape']
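
The mismatch here is a Keras version skew: Keras 3 (the default in TF 2.16) writes the InputLayer config key as batch_shape, while Keras 2 (the default in TF 2.13, whose keras/src/engine path appears in the traceback) expects batch_input_shape, so an .h5 saved by one cannot be read by the other. A minimal workaround sketch, assuming the file was saved under Keras 3 and must be loaded under Keras 2 (run it on a copy of the file, not the original):

import json
import h5py

# Rename Keras 3's 'batch_shape' key to the Keras 2 name 'batch_input_shape'
# inside the model config stored in the HDF5 file.
with h5py.File('helfen_1.h5', 'r+') as f:
    config = json.loads(f.attrs['model_config'])
    for layer in config['config']['layers']:
        if layer['class_name'] == 'InputLayer' and 'batch_shape' in layer['config']:
            layer['config']['batch_input_shape'] = layer['config'].pop('batch_shape')
    f.attrs['model_config'] = json.dumps(config)

Saving and loading with the same Keras major version on both sides avoids the patch entirely.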

@MuhammadBilal848 added the TFLiteConverter (For issues related to TFLite converter) label on Apr 24, 2024
@TeachVoyager

My thinking on this error is that it probably stems from multiple issues. I can offer a few ideas, and anyone is free to correct me if I'm wrong.

First, I would check whether your TensorFlow build matches the versions you're using, since the log does suggest rebuilding TensorFlow with the appropriate compiler flags.

Second, I would double-check that the versions are even compatible. That could be causing the issue.

@SuryanarayanaY added the comp:keras (Keras related issues) and TF 2.16 labels on Apr 25, 2024
@SuryanarayanaY (Collaborator)

Hi @MuhammadBilal848,

TF 2.16 uses Keras 3 by default, and in Keras 3 saving to the TF SavedModel format via model.save() is no longer supported. Please refer to the migration guide for more details.
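
For context, a minimal sketch of the two saving paths in Keras 3 (the model architecture and file names are illustrative; the (50,) input shape is taken from the error above):

import tensorflow as tf

# Illustrative model only.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(50,)),
    tf.keras.layers.Dense(1),
])

# Keras 3 native format, reloadable with tf.keras.models.load_model():
model.save('model.keras')

# A TF SavedModel (for serving or TFLite conversion) now comes from export():
model.export('saved_model_dir')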

@MuhammadBilal848 (Author)

So I trained another model just to check, and used model.export("FOLDER_NAME") instead of model.save("model.h5").
Got this as output:

[image]

and the folder is saved with assets, variables, a .pb file, and a fingerprint:

[image]

I loaded the model using tf.keras.layers.TFSMLayer("FOLDER_NAME", call_endpoint="serving_default").

It worked. @SuryanarayanaY Thank you 🖤
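
For anyone following along, a sketch of what calling the reloaded endpoint looks like (the folder name and the (None, 50) input shape are assumptions carried over from the snippets above):

import numpy as np
import tensorflow as tf

# TFSMLayer wraps a SavedModel signature as an inference-only Keras layer.
reloaded = tf.keras.layers.TFSMLayer("FOLDER_NAME", call_endpoint="serving_default")

# Calling the layer runs the serving signature; the result is typically a
# dict keyed by the signature's output names.
dummy = np.zeros((1, 50), dtype=np.float32)
outputs = reloaded(dummy)
print(outputs)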

@MuhammadBilal848 (Author) commented Apr 25, 2024

> Hi @MuhammadBilal848,
>
> TF 2.16 uses Keras 3 by default, and in Keras 3 saving to the TF SavedModel format via model.save() is no longer supported. Please refer to the migration guide for more details.

Also, could you tell me how I can convert the .pb (SavedModel) model to .h5? I want to convert the model to TFLite.

I tried using this:

[image]

Got:

To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
W0000 00:00:1714042377.283703    4504 tf_tfl_flatbuffer_helpers.cc:390] Ignored output_format.
W0000 00:00:1714042377.284215    4504 tf_tfl_flatbuffer_helpers.cc:393] Ignored drop_control_dependency.
2024-04-25 15:52:57.285827: I tensorflow/cc/saved_model/reader.cc:83] Reading SavedModel from: C:\Users\Bilal\AppData\Local\Temp\tmphhpbtz_x
2024-04-25 15:52:57.287069: I tensorflow/cc/saved_model/reader.cc:51] Reading meta graph with tags { serve }
2024-04-25 15:52:57.287242: I tensorflow/cc/saved_model/reader.cc:146] Reading SavedModel debug info (if present) from: C:\Users\Bilal\AppData\Local\Temp\tmphhpbtz_x
2024-04-25 15:52:57.300472: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:388] MLIR V1 optimization pass is not enabled
2024-04-25 15:52:57.306249: I tensorflow/cc/saved_model/loader.cc:234] Restoring SavedModel bundle.
2024-04-25 15:52:57.378809: I tensorflow/cc/saved_model/loader.cc:218] Running initialization op on SavedModel bundle at path: C:\Users\Bilal\AppData\Local\Temp\tmphhpbtz_x
2024-04-25 15:52:57.391327: I tensorflow/cc/saved_model/loader.cc:317] SavedModel load for tags { serve }; Status: success: OK. Took 105497 microseconds.
2024-04-25 15:52:57.412351: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:268] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.
Traceback (most recent call last):
  File "F:\Projects\Trigger Word Detection\converter.py", line 10, in <module>
    tflite_model = converter.convert()
  File "F:\Projects\Trigger Word Detection\twd\lib\site-packages\tensorflow\lite\python\lite.py", line 1175, in wrapper
    return self._convert_and_export_metrics(convert_func, *args, **kwargs)
  File "F:\Projects\Trigger Word Detection\twd\lib\site-packages\tensorflow\lite\python\lite.py", line 1129, in _convert_and_export_metrics
    result = convert_func(self, *args, **kwargs)
  File "F:\Projects\Trigger Word Detection\twd\lib\site-packages\tensorflow\lite\python\lite.py", line 1636, in convert
    saved_model_convert_result = self._convert_as_saved_model()
  File "F:\Projects\Trigger Word Detection\twd\lib\site-packages\tensorflow\lite\python\lite.py", line 1617, in _convert_as_saved_model
    return super(TFLiteKerasModelConverterV2, self).convert(
  File "F:\Projects\Trigger Word Detection\twd\lib\site-packages\tensorflow\lite\python\lite.py", line 1407, in convert
    result = _convert_graphdef(
  File "F:\Projects\Trigger Word Detection\twd\lib\site-packages\tensorflow\lite\python\convert_phase.py", line 212, in wrapper
    raise converter_error from None  # Re-throws the exception.
  File "F:\Projects\Trigger Word Detection\twd\lib\site-packages\tensorflow\lite\python\convert_phase.py", line 205, in wrapper
    return func(*args, **kwargs)
  File "F:\Projects\Trigger Word Detection\twd\lib\site-packages\tensorflow\lite\python\convert.py", line 995, in convert_graphdef
    data = convert(
  File "F:\Projects\Trigger Word Detection\twd\lib\site-packages\tensorflow\lite\python\convert.py", line 367, in convert
    raise converter_error
tensorflow.lite.python.convert_phase.ConverterError: Could not translate MLIR to FlatBuffer.
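
A sketch of an alternative worth trying for this ConverterError (not a confirmed fix for this exact model): point the converter at the exported SavedModel folder directly with from_saved_model, instead of reloading through Keras first.

import tensorflow as tf

# Convert the directory produced by model.export("FOLDER_NAME") directly.
converter = tf.lite.TFLiteConverter.from_saved_model("FOLDER_NAME")
tflite_model = converter.convert()

with open('converted_model.tflite', 'wb') as f:
    f.write(tflite_model)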

@MuhammadBilal848 (Author)


I tried this method and it worked:

[image]

@SuryanarayanaY (Collaborator)

Hi @MuhammadBilal848,

Thanks for the confirmation; happy that it worked. Could you please mark this issue as closed? Thanks!

@SuryanarayanaY added the stat:awaiting response (Status - Awaiting response from author) label on Apr 29, 2024