At Runtime : "Error while reading resource variable softmax/kernel from Container: localhost" #28287
Comments
Any updates on this?
This is not a build/installation or bug/performance issue. Please post support questions like this on Stack Overflow; there is a big community there to support and learn from your questions. GitHub is mainly for addressing bugs in installation and performance. Thanks!
I had the same issue in TensorFlow 1.13.1, which I resolved by keeping a reference to the session used for loading the models and then setting it to be used by Keras in each request. I.e. I did the following:

```python
import tensorflow as tf
from tensorflow.python.keras.backend import set_session
from tensorflow.python.keras.models import load_model

tf_config = some_custom_config
sess = tf.Session(config=tf_config)
graph = tf.get_default_graph()

# IMPORTANT: models have to be loaded AFTER SETTING THE SESSION for Keras!
# Otherwise, their weights will be unavailable in the threads after the session has been set.
set_session(sess)
model = load_model(...)
```

and then in each request (i.e. in each thread):

```python
global sess
global graph
with graph.as_default():
    set_session(sess)
    model.predict(...)
```
You are amazing!!!!! This is the best solution. Can you tell me why it only worked after adding the session?
Thank you, and you are very welcome :). As far as I understand, the problem is that TensorFlow graphs and sessions are not thread-safe. So by default a new session (which does not contain any previously loaded weights, models, and so on) is created for each thread, i.e. for each request. By saving the global session that contains all your models and setting it to be used by Keras in each thread, the problem is solved.
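The thread-local "default" behavior described above can be illustrated without TensorFlow at all. Below is a minimal pure-Python analogy (the names `set_default`/`get_default` are hypothetical stand-ins for Keras' `set_session` and the default graph, not real TF APIs): each thread starts with an empty default, so state installed in the main thread is invisible unless a global reference is explicitly re-installed in the worker thread.

```python
import threading

# Analogy for TF's thread-local default graph/session:
# each thread sees its own "default" slot.
_default = threading.local()

def set_default(value):
    _default.value = value

def get_default():
    return getattr(_default, "value", None)

set_default("model-weights")  # "loaded" in the main thread

results = {}

def worker_without_global():
    # New thread: its thread-local default slot is empty.
    results["missing"] = get_default()

def worker_with_global(shared):
    # Explicitly re-install the shared object in this thread,
    # like calling set_session(sess) inside each request.
    set_default(shared)
    results["found"] = get_default()

t1 = threading.Thread(target=worker_without_global)
t1.start(); t1.join()
t2 = threading.Thread(target=worker_with_global, args=("model-weights",))
t2.start(); t2.join()
print(results)  # {'missing': None, 'found': 'model-weights'}
```

This is why the `global sess` / `set_session(sess)` pattern works: it re-installs the one session that actually holds the loaded weights into each request-handling thread.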
Closing this out since I understand it to be resolved, but please let me know if I'm mistaken. Thanks!
I have the same issue with TensorFlow version 1.13.1; the above solution works for me.
I am having the same issue and was wondering what the value of `some_custom_config` was?
In case you want to configure your session (which I had to do), you can pass the config in this parameter. Otherwise just leave it out.
Thank you so much! Everything is running perfectly now.
Thanks for providing the code. I ran into a similar error message while running BERT on Keras. I tried your solution but can't seem to get it to work. Any guidance is most appreciated!
I have a similar error when using Elmo embeddings from tf-hub inside a custom Keras layer.
Thank you so much for this. In my case I did it a bit differently, in case it helps anyone:

```python
import tensorflow as tf
from tensorflow.python import keras as k

# on thread 1
session = tf.Session(graph=tf.Graph())
with session.graph.as_default():
    k.backend.set_session(session)
    model = k.models.load_model(filepath)

# on thread 2
with session.graph.as_default():
    k.backend.set_session(session)
    model.predict(x, **kwargs)
```

The novelty here is that it allows multiple models to be loaded (once) and used in multiple threads.
@SungmanHong instead of `tf.Session`, use `tf.compat.v1.Session` to import the session in TF 2.x.
This worked for me, thanks!
Man, you are a genius and awesome, you just saved my project. Thank you so much!
I have applied the suggestion from tensorflow/tensorflow#28287 (comment). I'm not familiar with TensorFlow, so I don't know if I've fixed the problem correctly, but now the trainer is no longer stuck in the waiting state.
I have used the given solution, but it still didn't work for me; I'm getting the error.
Please help, it's showing `some_custom_config` is not defined.
Works for me, thanks!!
Am I correct that the only way to solve this issue in TF2 is to disable eager execution? |
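Disabling eager execution via `tf.compat.v1` is indeed one way to keep the session-based pattern working on TF 2.x. Here is a minimal sketch, assuming TensorFlow 2.x is installed and using a toy graph in place of a real Keras model (the tensors `x` and `y` are illustrative, not from the original post):

```python
import tensorflow as tf

# Fall back to 1.x-style graph mode; tf.compat.v1 exposes the old API.
tf.compat.v1.disable_eager_execution()

graph = tf.compat.v1.get_default_graph()
sess = tf.compat.v1.Session(graph=graph)

with graph.as_default():
    # Stand-in for loading a model: a trivial placeholder computation.
    x = tf.compat.v1.placeholder(tf.float32, shape=(None,))
    y = x * 2.0

# In a request handler (possibly another thread), reuse the same
# graph and session instead of letting TF create fresh defaults:
with graph.as_default():
    result = sess.run(y, feed_dict={x: [1.0, 2.0]})

print(result)
```

For Keras specifically, the equivalent of the 1.x `set_session` call lives at `tf.compat.v1.keras.backend.set_session`.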
Hi, I am using TensorFlow 2.4.1 and getting a similar error. Below are my code and the error. Can you help? Code pasted on Stack Overflow:
Hi brother, I have the same problem; can it be solved?
TF 2.5, any solution please?
Thanks for your answer. I'm having problems specifically using TF 1.14. Does anyone have an idea what it could be?
System information
You can collect some of this information using our environment capture script:

```shell
python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"
```

v1.12.0-9492-g2c319fb415 2.0.0-alpha0
Describe the current behavior
When running "flaskApp.py", after loading the model and trying to classify an image using "predict", it fails with the error:
Describe the expected behavior
a result of image classification should be returned.
Code to reproduce the issue
Steps to reproduce:
```shell
git clone https://github.com/viaboxxsystems/deeplearning-showcase.git
git checkout tensorflow_2.0
pip3 install -r requirements.txt
export FLASK_APP=flaskApp.py
flask run
```
OR
Other info / logs