
Saving optimizer state (adagrad/momentum/etc.) #5595

Closed
rmeertens opened this issue Nov 14, 2016 · 1 comment

Comments

@rmeertens

Hey everybody,

Last week I asked this question on Stack Overflow: https://stackoverflow.com/questions/40547198/saving-the-state-of-the-adagrad-algorithm-in-tensorflow.
My problem is that I want to save the state of the optimizer (in my case the Adagrad accumulators) so I can stop training and resume whenever I want.

Unless I'm mistaken, the state of the optimizer can't be saved (you can't pass an optimizer to a tf.train.Saver, right?). A quick (hacky?) workaround for me might be to call Optimizer.get_slot_names() and save the variable behind each slot.
The next problem would be putting these variables back into the slots, as I don't think there is a set_slot(name,op) at the moment.
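A minimal sketch of that workaround, assuming the TF 1.x-era tf.train APIs current when this issue was filed; the toy variable w and the loss are illustrative, not from the original question:

```python
import tensorflow as tf

# Illustrative model: a single trainable variable and a toy loss.
w = tf.Variable(tf.zeros([10]), name="w")
loss = tf.reduce_sum(tf.square(w))

optimizer = tf.train.AdagradOptimizer(learning_rate=0.1)
train_op = optimizer.minimize(loss)

# Collect the slot variables (for Adagrad, the accumulators) so they
# can be handed to the Saver explicitly alongside the model variables.
slot_vars = [
    optimizer.get_slot(var, name)
    for name in optimizer.get_slot_names()
    for var in tf.trainable_variables()
    if optimizer.get_slot(var, name) is not None
]
saver = tf.train.Saver(var_list=tf.trainable_variables() + slot_vars)
```
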

So my questions are:

  • Am I right that this is currently impossible?
  • Do we want to have a set_slot(name,op) function in the Optimizer class? (I am willing to help out with this)
  • Do we want to be able to pass an optimizer to a Saver object?
@girving
Contributor

girving commented Nov 14, 2016

Thank you for asking the question on Stack Overflow, which is a better place for it. The optimizer state is saved by default; it is only missing in your case because you are explicitly telling the Saver which variables to save.
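A minimal sketch of the default behaviour girving describes, again assuming TF 1.x and an illustrative checkpoint path: a Saver constructed without a var_list covers all global variables, including the slot variables the optimizer creates.

```python
import tensorflow as tf

# Illustrative model, as above.
w = tf.Variable(tf.zeros([10]), name="w")
loss = tf.reduce_sum(tf.square(w))
train_op = tf.train.AdagradOptimizer(0.1).minimize(loss)

# No var_list: the Saver picks up all global variables,
# Adagrad's accumulators included.
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(train_op)
    saver.save(sess, "/tmp/model.ckpt")  # optimizer state lands in the checkpoint too
```

Restoring with saver.restore(sess, "/tmp/model.ckpt") then brings the accumulators back along with the weights, which is the stop-and-resume behaviour the issue asks for.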
