Unable to load saved model #107

Open
hzhaoc opened this issue Nov 26, 2020 · 13 comments

@hzhaoc

hzhaoc commented Nov 26, 2020

Unable to load saved model.

Steps to Reproduce

  1. First step:

import tensorflow.compat.v1 as tf  # version 2.3.1
tf.enable_resource_variables()
tf.disable_v2_behavior()

  2. Second step:

new_model = tf.keras.models.load_model('./model/COVIDNet-CXR-Small/savedModel')

Expected behavior

type(new_model)

output: tensorflow.python.keras.engine.sequential.Sequential

Actual behavior

ValueError: Node 'gradients/post_bn/cond/FusedBatchNorm_grad/FusedBatchNormGrad' has an _output_shapes attribute inconsistent with the GraphDef for output #3: Dimension 0 in both shapes must be equal, but are 0 and 2048. Shapes are [0] and [2048].

Environment

  • python 3.6
  • tensorflow 2.3.1
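
A minimal sketch of an alternative load path, using the TF1 SavedModel loader instead of tf.keras.models.load_model; the 'serve' tag is only the usual default and is an assumption here, and the load may still hit the same GraphDef inconsistency:

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

export_dir = './model/COVIDNet-CXR-Small/savedModel'  # path from the steps above

with tf.Session(graph=tf.Graph()) as sess:
    # Restore the MetaGraphDef and variables into this session; 'serve' is the standard export tag (assumed here)
    meta_graph = tf.saved_model.loader.load(sess, ['serve'], export_dir)
    print(meta_graph.signature_def)  # inspect the available input/output signatures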
@HUI11126

Hello. Have you solved the error?

@chododom

chododom commented Feb 24, 2021

I have the same exact issue with loading any of the COVID-Net CXR models.

@kapilb7

kapilb7 commented Feb 28, 2021

> I have the same exact issue with loading any of the COVID-Net CXR models.

I'm facing the same problem...

@chododom

I have managed to load the model as a graph using the code provided in inference.py (import_meta_graph function).

@kapilb7

kapilb7 commented Feb 28, 2021

> I have managed to load the model as a graph using the code provided in inference.py (import_meta_graph function).

Oh, can you just send the code of inference.py? I'm actually a beginner, so I'm not sure whether the file arguments I'm passing to the parser are correct... I actually want a .tflite format model.

@chododom

chododom commented Feb 28, 2021

The only thing I really changed was the way I pass the arguments: I created a dictionary containing the info about the model (the default info that can be seen in the inference.py script).

args = {
    'name': 'COVIDNet-CXR4-A',
    'weightspath': '/path_to_model_directory/COVIDNet-CXR4-A',
    'metaname': 'model.meta',
    'ckptname': 'model-18540',
    'in_tensorname': 'input_1:0',
    'out_tensorname': 'norm_dense_1/Softmax:0',
    'input_size': 480,
    'top_percent': 0.08
}

Then it is basically the same as follows:

import os
import numpy as np
import tensorflow.compat.v1 as tf
from data import process_image_file  # preprocessing helper from the repo's data.py

new_graph = tf.Graph()
with tf.Session(graph=new_graph) as sess:
    # Restore the graph definition from the .meta file and the weights from the checkpoint
    saver = tf.train.import_meta_graph(os.path.join(args['weightspath'], args['metaname']))
    saver.restore(sess, os.path.join(args['weightspath'], args['ckptname']))
    covid_net = tf.get_default_graph()

    mapping = {0: 'negative', 1: 'positive'}

    # Look up the input and output tensors by name in the restored graph
    image_tensor = covid_net.get_tensor_by_name(args['in_tensorname'])
    pred_tensor = covid_net.get_tensor_by_name(args['out_tensorname'])
    x = process_image_file(img_path, args['top_percent'], args['input_size'])  # img_path: path to the chest X-ray image
    x = x.astype('float32') / 255.0

    pred = sess.run(pred_tensor, feed_dict={image_tensor: np.expand_dims(x, axis=0)})
    # Combining pneumonia and covid predictions into a single pneumonia prediction.
    pred_pneumonia = np.array([pred[0][0], np.max([pred[0][1], pred[0][2]])])
    pred_pneumonia = pred_pneumonia / np.sum(pred_pneumonia)

    print('Prediction: {}'.format(mapping[pred_pneumonia.argmax()]))
    print('Confidence')
    print('Negative (Normal): {:.3f}, Positive (Pneumonia): {:.3f}'.format(pred_pneumonia[0], pred_pneumonia[1]))

Mind you, this code is prepared to fit my specific needs, so it is a binary prediction of COVID-19 negative or positive samples.
Hope this helps ;)

@kapilb7

kapilb7 commented Feb 28, 2021

> The only thing I really changed was the way I pass the arguments: I created a dictionary containing the info about the model [...] Hope this helps ;)

I want to try it with the COVIDNet-CXR-Large

This is the Inference.py file:

import tensorflow.compat.v1 as tf  # version 2.3.1
tf.enable_resource_variables()
tf.disable_v2_behavior()
import numpy as np
import os, argparse
import cv2
import data

args = {
	'name': 'COVIDNet-CXR-Large',
	'weightspath': '/Users/kapil/Documents/COVIDNet-CXR-Large',
	'metaname': '/Users/kapil/Documents/COVIDNet-CXR-Large/model.meta',
	'ckptname': 'model-8485',
	'in_tensorname': 'input_1:0',
	'out_tensorname': 'norm_dense_1/Softmax:0',
	'input_size': 480,
	'top_percent': 0.08
}

new_graph = tf.Graph()  
with tf.Session(graph = new_graph) as sess:  
	tf.get_default_graph()
	saver = tf.train.import_meta_graph(os.path.join(args['weightspath'], args['metaname']))
	saver.restore(sess, os.path.join(args['weightspath'], args['ckptname']))
	covid_net = tf.get_default_graph()
	
	mapping = {0: 'negative', 1: 'positive'}
	
	image_tensor = new_graph.get_tensor_by_name(args['in_tensorname'])
	pred_tensor = new_graph.get_tensor_by_name(args['out_tensorname'])
	x = process_image_file(img_path, args['top_percent'], args['input_size'])
	x = x.astype('float32') / 255.0
	
	pred = sess.run(pred_tensor, feed_dict={image_tensor: np.expand_dims(x, axis=0)})
	# Combining pneumonia and covid predictions into single pneumonia prediction.
	pred_pneumonia = np.array([pred[0][0], np.max([pred[0][1], pred[0][2]])])
	pred_pneumonia = pred_pneumonia / np.sum(pred_pneumonia)
	
	print('Prediction: {}'.format(mapping[pred_pneumonia.argmax()]))
	print('Confidence')
	print('Negative (Normal): {:.3f}, Positive (Pneumonia): {:.3f}'.format(pred_pneumonia[0], pred_pneumonia[1]))

But I'm getting this error:

WARNING:tensorflow:From /Users/kapil/.local/lib/python3.8/site-packages/tensorflow/python/compat/v2_compat.py:96: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.
Instructions for updating:
non-resource variables are not supported in the long term
2021-02-28 21:04:57.272142: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-02-28 21:04:57.272397: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-02-28 21:05:01.997588: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:196] None of the MLIR optimization passes are enabled (registered 0 passes)
Traceback (most recent call last):
File "Inference.py", line 31, in
pred_tensor = new_graph.get_tensor_by_name(args['out_tensorname'])
File "/Users/kapil/.local/lib/python3.8/site-packages/tensorflow/python/framework/ops.py", line 3902, in get_tensor_by_name
return self.as_graph_element(name, allow_tensor=True, allow_operation=False)
File "/Users/kapil/.local/lib/python3.8/site-packages/tensorflow/python/framework/ops.py", line 3726, in as_graph_element
return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
File "/Users/kapil/.local/lib/python3.8/site-packages/tensorflow/python/framework/ops.py", line 3766, in _as_graph_element_locked
raise KeyError("The name %s refers to a Tensor which does not "
KeyError: "The name 'norm_dense_1/Softmax:0' refers to a Tensor which does not exist. The operation, 'norm_dense_1/Softmax', does not exist in the graph."

Please help me!!!

@chododom

Looking at the history of the inference file, it seems the pred_tensor (out_tensor in arguments) should be called "dense_3/Softmax:0" for the CXR-Large model.

Check this file out, compare it to your case: https://github.com/lindawangg/COVID-Net/blob/46b4b00ab049f02e02e81829c5fd2cafb5caad2c/inference.py
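
If the output tensor name is unclear for a given checkpoint, a small sketch for finding it, to be run right after import_meta_graph/restore as in the code above, is to list the graph's operation names and filter for the softmax:

# List operation names in the restored graph and look for the softmax output op
ops = [op.name for op in tf.get_default_graph().get_operations()]
print([name for name in ops if 'Softmax' in name])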

@kapilb7

kapilb7 commented Feb 28, 2021

Solved that issue, but I don't know why I'm getting this now:
AttributeError: module 'data' has no attribute 'process_image_file'

Why am I getting this now?? Also, how do I actually do the inference part by comparing it with an example image??
One more thing, how can I convert this to a tflite model?

I know I'm asking too much, but please help me!!

@chododom

Well, the inference script uses the process_image_file function to preprocess the images (I think there's some resizing and random augmentation going on). You need to import the function correctly so that the code you posted above can work. Make sure the file data.py contains that function definition; you could try something like from data import process_image_file, or even just declare the function (and the functions it calls) in the same script.

I don't really know what you mean by your second question. You give the input file path to the script in the variable img_path (you can see it is the argument of the preprocessing function) and the script will print the probabilities for each class and tell you the highest one. You should know which class the image really belonged to based on the data labels.

I don't know how to convert the model to TF Lite; I am also a beginner and haven't encountered this, sorry.
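
For reference, a minimal sketch of the import and the preprocessing call as used in the snippets above; the example image path is hypothetical, and the (path, top_percent, size) argument order is taken from this thread, so double-check it against the repository's data.py:

from data import process_image_file  # preprocessing helper from the repo's data.py

img_path = 'assets/ex-covid.jpeg'  # hypothetical example image path
x = process_image_file(img_path, 0.08, 480)  # same arguments as in the snippets above
x = x.astype('float32') / 255.0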

@kapilb7

kapilb7 commented Feb 28, 2021

> Well, the inference script uses the process_image_file function to preprocess the images [...]

Got it working! Thanks a lot!
But I'm still not able to load the savedModel directory to convert it into a tflite file, just like @hzhaoc couldn't...

@chododom

You're welcome :)

Yeah, the way @hzhaoc tried with Keras was not working for me either. Unfortunately, I'm not familiar enough with TF/Keras to resolve that issue.
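
On the TFLite question, one route that might work (a sketch only, not verified against these checkpoints) is to convert straight from the restored tf.compat.v1 session, inside the same with-block as the inference code above, reusing the image_tensor and pred_tensor handles; the output filename is just an example:

# Convert directly from the live session; the input/output tensors come from the restored graph
converter = tf.lite.TFLiteConverter.from_session(sess, [image_tensor], [pred_tensor])
tflite_model = converter.convert()
with open('covidnet.tflite', 'wb') as f:  # example output path
    f.write(tflite_model)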

@Abhishek-Prajapat

I am getting the same error, so I would like to know if any of you has created a Keras-based saved file of the model. If so, can you please share that file with me?
