
Restart dual_optimizer state when performing dual restarts #28

Open
juan43ramirez opened this issue Jun 7, 2022 · 3 comments
Labels: enhancement (New feature or request)

@juan43ramirez (Collaborator)

Enhancement

When a dual restart is triggered, the dual variables are reset to their initial value of 0. However, the state of the primal and dual optimizers remains unchanged. This state may include the running averages used by momentum mechanisms.

These optimizer states could be reset along with the dual variables when feasibility is achieved.

Motivation

This would represent a full reset of the optimization protocol when the constraint is satisfied. Currently, the reset is "half-baked" in the sense that only the dual variables are reset.

References

Resetting the state of a PyTorch optimizer: https://discuss.pytorch.org/t/reset-adaptive-optimizer-state/14654/5
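A minimal sketch of such a reset, following the approach in the linked thread. The `reset_optimizer_state` helper and the Adam/dual-variable setup below are illustrative only, not part of Cooper's API:

```python
from collections import defaultdict

import torch

def reset_optimizer_state(optimizer: torch.optim.Optimizer) -> None:
    # `optimizer.state` maps each parameter to its state dict
    # (e.g. `exp_avg` and `exp_avg_sq` for Adam). Re-initializing it
    # discards all running averages; they are rebuilt from scratch on
    # the next call to `optimizer.step()`.
    optimizer.state = defaultdict(dict)

# Illustrative usage with dual variables and an Adam dual optimizer.
multiplier = torch.zeros(3, requires_grad=True)
dual_optimizer = torch.optim.Adam([multiplier], lr=1e-2)

# ... on a dual restart:
with torch.no_grad():
    multiplier.zero_()  # reset the dual variables to their initial value 0
reset_optimizer_state(dual_optimizer)
```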

@juan43ramirez juan43ramirez added the enhancement New feature or request label Jun 7, 2022
@gallego-posada (Collaborator)

This is a good idea.

Stale state would certainly be problematic for the dual variables: the momentum "accumulated" during periods of feasibility might prevent the multiplier from moving in the right direction if the constraint becomes violated later.

I am not sure whether this is as problematic for the primal optimizer. Maybe we could provide a flag to also reset the state of the primal optimizer upon dual restarts, but not force it.
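A hypothetical sketch of that flag; all names here (including `dual_restart` and `reset_primal_optimizer_state`) are illustrative, not Cooper's actual interface:

```python
from collections import defaultdict

import torch

def reset_optimizer_state(optimizer: torch.optim.Optimizer) -> None:
    # Discard all per-parameter running averages / momentum buffers.
    optimizer.state = defaultdict(dict)

def dual_restart(multiplier: torch.Tensor,
                 dual_optimizer: torch.optim.Optimizer,
                 primal_optimizer: torch.optim.Optimizer,
                 reset_primal_optimizer_state: bool = False) -> None:
    with torch.no_grad():
        multiplier.zero_()  # reset the dual variables to 0
    # Always reset the dual optimizer state, so stale momentum does not
    # drag the multiplier once the constraint is violated again.
    reset_optimizer_state(dual_optimizer)
    if reset_primal_optimizer_state:
        # Opt-in: also wipe the primal optimizer's momentum/running stats.
        reset_optimizer_state(primal_optimizer)
```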

@juan43ramirez juan43ramirez self-assigned this Aug 24, 2022
@juan43ramirez juan43ramirez changed the title Restart dual_optimizer (and perhaps primal optimizer) state when performing dual restarts Restart dual_optimizer state when performing dual restarts Aug 24, 2022
@juan43ramirez (Collaborator, Author)

Perhaps we could keep the primal optimizer's state.

What worries me is the momentum "towards satisfying the constraints" that the primal optimizer may have accumulated by the time feasibility is reached. Moreover, its running means may have accumulated possibly large contributions from the $\lambda \nabla g$ term (with a large $\lambda$ at the time the constraint is satisfied). This could bias the direction and aggressively shrink the magnitude of the updates after a restart, which should mostly (or only) focus on the objective function.

That being said, (i) even if the momentum and running means are slightly misleading, they have been computed (and will keep being updated) from objective-heavy gradients, and (ii) I am not sure whether addressing these "issues" would have big practical implications.

@gallego-posada (Collaborator)

Modifying the state of the dual optimizer based on the feasibility of the constraint is challenging in general. It is manageable for optimizers like SGD with momentum, but could become very difficult for generic optimizers, since their internal state might be "shared" across parameters. For example, an optimizer might keep track of correlations between the gradients of different parameters.

The practical implications of this misalignment between the optimizer state and the reset value of the multiplier are unclear to me (and I suspect they depend on the type of optimizer).

For now I would suggest (1) simply performing the value reset, (2) leaving the optimizer state untouched, and (3) documenting this pitfall explicitly in the Multiplier class.
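To illustrate why SGD with momentum is the manageable case: its state is just a per-parameter `momentum_buffer`, so a targeted reset of a single multiplier's state is possible. A sketch under that assumption (shared-state optimizers admit no such surgery):

```python
import torch

multiplier = torch.zeros(5, requires_grad=True)
dual_optimizer = torch.optim.SGD([multiplier], lr=1e-2, momentum=0.9)

# ... after some optimization steps, on a dual restart:
# SGD stores a per-parameter `momentum_buffer`, so the entry for this
# multiplier can be dropped without touching any other parameter.
if multiplier in dual_optimizer.state:
    dual_optimizer.state[multiplier].pop("momentum_buffer", None)
```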
