
Using convert_weights_to_tf_lite.py did not produce the same results as the pre-training model. #68

Open
FrozenEelFanGirl opened this issue Mar 9, 2023 · 3 comments


@FrozenEelFanGirl

I used this project to retrain on the DNS-Challenge dataset. After training finished, I tried to convert the model with convert_weights_to_tf_lite.py, but the files I got (_1.tflite 372 KB, _2.tflite 641 KB) did not match the pretrained models (model_quant_1.tflite 369 KB, model_quant_2.tflite 635 KB). I then tried converting the pretrained models themselves (model.h5 and the other two .h5 models) to TFLite, and those did not match the pretrained .tflite files either. Is there any suggestion for me? I wonder why I cannot even convert the pretrained model and get consistent results. I used TF 2.10.0, which should be the last version supported on Windows, and converted on CPU only.

@FrozenEelFanGirl
Author

I checked the model information of the pretrained quant model and my converted quant model, and they are completely different.
Here is the pretrained quant_model:
Input details:
[{'name': 'input_2', 'index': 0, 'shape': array([ 1, 1, 257]), 'shape_signature': array([ 1, 1, 257]), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}, {'name': 'input_3', 'index': 1, 'shape': array([ 1, 2, 128, 2]), 'shape_signature': array([ 1, 2, 128, 2]), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}]
Output details:
[{'name': 'Identity', 'index': 66, 'shape': array([ 1, 1, 257]), 'shape_signature': array([ 1, 1, 257]), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}, {'name': 'Identity_1', 'index': 69, 'shape': array([ 1, 2, 128, 2]), 'shape_signature': array([ 1, 2, 128, 2]), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}]
Model details:
Name: input_2
Type: <class 'numpy.float32'>
Shape: [ 1 1 257]
Quantization Parameters: (0.0, 0)
..... and more

Here is the converted quant_model:
Input details:
[{'name': 'serving_default_input_13:0', 'index': 0, 'shape': array([ 1, 2, 128, 2]), 'shape_signature': array([ 1, 2, 128, 2]), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}, {'name': 'serving_default_input_12:0', 'index': 1, 'shape': array([ 1, 1, 257]), 'shape_signature': array([ 1, 1, 257]), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}]
Output details:
[{'name': 'StatefulPartitionedCall:0', 'index': 64, 'shape': array([ 1, 1, 257]), 'shape_signature': array([ 1, 1, 257]), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}, {'name': 'StatefulPartitionedCall:1', 'index': 69, 'shape': array([ 1, 2, 128, 2]), 'shape_signature': array([ 1, 2, 128, 2]), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}]
Model details:
Name: serving_default_input_13:0
Type: <class 'numpy.float32'>
Shape: [ 1 2 128 2]
Quantization Parameters: (0.0, 0)
..... and more
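One concrete difference visible in the two dumps above: the converted model's inputs come back in the opposite order (index 0 is the [1, 2, 128, 2] state tensor instead of the [1, 1, 257] spectrum tensor), and the tensor names change from input_2/input_3 to serving_default_input_*:0. A minimal sketch of a workaround, assuming you feed the interpreter yourself: look inputs up by shape instead of hard-coding indices. The dicts below are abbreviated copies of the interpreter output above; in real code they come from interpreter.get_input_details():

```python
# Abbreviated input details, copied from the two dumps in this issue.
pretrained_inputs = [
    {"name": "input_2", "index": 0, "shape": (1, 1, 257)},
    {"name": "input_3", "index": 1, "shape": (1, 2, 128, 2)},
]
converted_inputs = [
    {"name": "serving_default_input_13:0", "index": 0, "shape": (1, 2, 128, 2)},
    {"name": "serving_default_input_12:0", "index": 1, "shape": (1, 1, 257)},
]

def input_index_by_shape(details, shape):
    """Return the tensor index whose shape matches, ignoring input order."""
    for d in details:
        if tuple(d["shape"]) == tuple(shape):
            return d["index"]
    raise KeyError(f"no input with shape {shape}")

# The [1, 1, 257] spectrum input sits at index 0 in one model and
# index 1 in the other; a shape lookup handles both models the same way.
print(input_index_by_shape(pretrained_inputs, (1, 1, 257)))   # -> 0
print(input_index_by_shape(converted_inputs, (1, 1, 257)))    # -> 1
```

With matching shapes fed to matching tensors, the two models can then be compared on actual outputs rather than on metadata.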

@StuartIanNaylor

#52 (comment)

@FrozenEelFanGirl
Author

Thanks for your help; I will try to fix this.
