[Tensorflow] tensorflow.python.framework.errors_impl.FailedPreconditionError: Could not find variable 52/kernel. This could mean that the variable has been deleted. In TF1, it can also mean the variable is uninitialized. #21
Comments
Is "Conv2D + LeakyRelu" a supported pattern in TensorFlow? On the other hand, LPOT recognizes the MatMul in my model as quantizable, but when quantizing it always prints:
Could you please share the model if possible? Even a partial model would help us debug effectively, since words alone don't tell us much :)

Also, would you please uninstall both native tensorflow and intel-tensorflow from your env, and then install intel-tensorflow only? My guess is that LPOT didn't identify the TensorFlow version correctly, so it fell back to the default Conv2D quantizable configuration, which doesn't support single-convolution quantization.

Conv2D + LeakyRelu is not supported for TF 2.x; TF 1.15.up3 supports the Conv2D + BiasAdd + LeakyRelu fusion.

For MatMul quantization, TensorFlow doesn't support single-MatMul quantization, so LPOT adds an additional pass. Did you see a log like that? Would you please paste the full log if possible?
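As a quick way to check the environment state suggested above, a small sketch like this (standard library only; the list of distribution names to probe is an assumption) reports which TensorFlow builds are installed in the current env:

```python
from importlib import metadata

# Distribution names to probe; "tensorflow" is the stock build and
# "intel-tensorflow" the Intel-optimized one. This list is an assumption --
# extend it (e.g. with "tensorflow-gpu") if your env may contain other builds.
TF_DISTS = ("tensorflow", "intel-tensorflow")

def installed_tf_dists():
    """Return (name, version) for each TensorFlow distribution found."""
    found = []
    for name in TF_DISTS:
        try:
            found.append((name, metadata.version(name)))
        except metadata.PackageNotFoundError:
            pass
    return found

if __name__ == "__main__":
    dists = installed_tf_dists()
    names = {name for name, _ in dists}
    if names == {"intel-tensorflow"}:
        print("OK: only intel-tensorflow is installed.")
    elif len(names) > 1:
        print("Conflict: uninstall both, then reinstall intel-tensorflow only.")
    else:
        print("Installed TensorFlow distributions:", dists or "none")
```

If both distributions show up, uninstalling both and reinstalling only intel-tensorflow matches the maintainer's advice above.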
Of course! Thanks again for working on such a great package! For privacy reasons, I decided to email you the model (only initialized, without training) instead of posting it here; please check your inbox for the file, thank you. I have also tried uninstalling TensorFlow and keeping only Intel-Tensorflow in the environment, but the issue still occurs.
OK, I saw your mail and will give it a try later.
Of course! I have sent the link to the yaml configuration, main.py, and data files; please check your inbox.
Hi, just checking in to see if there is any update on this issue. I tried running quantization under TF after a small modification to the model (converting it from a static input shape to a dynamic input shape), and a similar error still occurs:
Hi @guomingz, just checking to see if there is any update on this issue. I have also verified that the issue persists with LPOT 1.6 and TensorFlow 2.6. On the other hand, when I try using a Keras session by modifying
A different error occurs:
I want to add that the node name seems different from the one I see in Netron. In Netron, all my model's node names start with 'REG_Net', the prefix I assigned, but in the error message the node name looks like it has not been renamed at all. I wonder if this is a compatibility issue? Thank you!
Netron doesn't reflect the real model for the saved-model format, especially if you opened saved_model.pb directly.
You may try disabling this line https://github.com/intel/lpot/blob/master/lpot/adaptor/tf_utils/graph_rewriter/generic/pre_optimize.py#L124 and see whether the issue is gone or not.
No feedback for over 2 weeks, so closing for now. Please reopen if the issue is still there.
Version: LPOT 1.5, Tensorflow 2.5, Intel-Tensorflow 2.5
Env: Google Colab
I was using a Keras saved model for quantization, and the following error occurred:
Also, I don't know why the system prints
[WARNING] There is no quantizable op type!!!
, because my model contains Conv2D and MatMul operations, which are clearly quantizable.
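For reference, a minimal yaml along these lines is what a quantization run like the one above would consume; the model name and field values here are placeholders, and the exact schema should be checked against the LPOT release in use:

```yaml
# Hypothetical minimal LPOT config sketch; field names follow the LPOT 1.x
# yaml schema, but verify against the docs for your exact release.
model:
  name: reg_net          # placeholder model name
  framework: tensorflow  # use the TensorFlow adaptor

quantization:
  calibration:
    sampling_size: 100   # number of calibration samples

tuning:
  accuracy_criterion:
    relative: 0.01       # tolerate up to 1% relative accuracy loss
  exit_policy:
    timeout: 0           # 0 = stop at the first config meeting the criterion
```

Getting `framework` and the calibration dataloader right matters here, since a misdetected framework is exactly the fallback behavior described earlier in the thread.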