
Can I export a GPU-trained model to a CPU-only machine for serving/evaluation? #3343

Closed
3rduncle opened this issue Jul 16, 2016 · 3 comments
Labels
stat:awaiting tensorflower Status - Awaiting response from tensorflower

Comments

@3rduncle

I trained a model on a GPU machine.
When I load that model on a CPU-only machine, I get the error below.

Thank you for any suggestions!

@jmchen-g
Contributor

@vrv Could you take a look at this please? Do we have such support now? Thanks.

@jmchen-g jmchen-g added the stat:awaiting tensorflower Status - Awaiting response from tensorflower label Jul 18, 2016
@vrv

vrv commented Jul 18, 2016

Yup, you can pass tf.ConfigProto(allow_soft_placement=True) to the tf.Session if you really want to ignore device placement directives in the graph. I'm not a huge fan of using this option (I'd rather the graph be rewritten to strip out the device fields explicitly), but that should work.
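For context, the suggestion above can be sketched as follows. This is a minimal illustration, not the thread's own code: it uses `tensorflow.compat.v1`, which keeps the TF 1.x-era `ConfigProto`/`Session` names available on modern TensorFlow (the original 2016 thread simply used `tf.ConfigProto` and `tf.Session`).

```python
# Minimal sketch of soft device placement, assuming a graph whose nodes
# carry device directives like "/gpu:0" that don't exist on this machine.
import tensorflow.compat.v1 as tf1

# allow_soft_placement=True lets the runtime fall back to an available
# device (e.g. CPU) when a device recorded in the graph is missing.
config = tf1.ConfigProto(allow_soft_placement=True)
sess = tf1.Session(config=config)
sess.close()

# The alternative vrv prefers, rewriting the graph to drop its device
# fields, can be done at load time with clear_devices=True, e.g.:
#   saver = tf1.train.import_meta_graph("model.ckpt.meta", clear_devices=True)
# ("model.ckpt.meta" is a hypothetical checkpoint path.)
```

The `clear_devices=True` route is closer to what vrv describes as stripping the device fields explicitly, since the loaded graph then carries no stale GPU placements at all.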

@vrv vrv closed this as completed Jul 18, 2016
@vrv

vrv commented Jul 18, 2016

Also, in the future, StackOverflow is the right place to ask these questions.


3 participants