feat: OptimizerChain #145

Open
wants to merge 6 commits into main
Conversation

@sumny commented Jun 7, 2021

The idea of OptimizerChain is to allow for sequentially running multiple Optimizers, optionally with an additional Terminator for each stage.
Credit to @mb706 who implemented this initially here.
Could also be used for running the same Optimizer with random restarts (as shown in the example).
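
A hedged usage sketch of what this could look like (the OptimizerChain constructor arguments and the toy objective below are assumptions based on this description, not the final API):

library(bbotk)
library(paradox)

# toy objective: minimize x^2 over [-5, 5]
objective = ObjectiveRFun$new(
  fun = function(xs) list(y = xs$x^2),
  domain = ps(x = p_dbl(lower = -5, upper = 5)),
  codomain = ps(y = p_dbl(tags = "minimize"))
)

instance = OptimInstanceSingleCrit$new(
  objective = objective,
  terminator = trm("evals", n_evals = 60)  # global budget
)

# run two optimizers one after the other; each stage gets its own additional
# terminator on top of the instance's global one (constructor signature assumed)
chain = OptimizerChain$new(
  optimizers  = list(opt("random_search"), opt("gensa")),
  terminators = list(trm("evals", n_evals = 20), trm("evals", n_evals = 40))
)
chain$optimize(instance)

Passing the same Optimizer several times with per-stage terminators would give the random-restart behaviour mentioned above.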

@jakob-r commented Jun 10, 2021

I like the idea conceptually, but I don't think I would introduce this into the package, as it adds nothing essential to the topic of optimization. It would also need more flexibility to be of practical use (e.g. determining the part of the instance the next optimizer is allowed to see).

It would make a nice gallery post, though.
That way the idea is published and there is code to copy and paste when needed, without the burden of documenting it and making it work as expected in every combination.

@mb706 commented Jun 10, 2021

I think it should be added.

@pfistfl commented Jun 10, 2021

I agree with @mb706, this is something we will very often need.

  • Restarts for focussearch, gensa, nlopt ... basically everything that can get stuck in local optima.
    This is not only relevant for tuning; we also make use of those optimizers in mlr3pipelines, during acquisition function optimization, etc.
    I would go as far as maybe adding a global restarts argument to opt() such that it automatically chains itself (see the sketch after this list).
  • Having an infinite version of e.g. hyperband: simply restart until the global termination criterion is reached.
  • Combining e.g. portfolios with BO (as done in mlr3automl).
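
For the restart use case, a rough sketch of how this looks even without a dedicated class, assuming an instance that has already been constructed with a global terminator as in the example above:

# restart the same optimizer until the instance's global terminator fires
optimizer = opt("gensa")
while (!instance$terminator$is_terminated(instance$archive)) {
  optimizer$optimize(instance)
}

A restarts argument on opt() would essentially wrap this loop (or the corresponding OptimizerChain construction) for the user.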

@sumny requested a review from @mllg on June 29, 2021
@sumny added this to Review in Workshop 2021 on Sep 29, 2021
param_sets = vector(mode = "list", length = length(optimizers))
ids_taken = character(0L)
# for each optimizer check whether the id of the param_set
# (decuded from the optimizer class) is already taken;
Contributor commented:

typo

private$.ids = map_chr(param_sets, "set_id")
super$initialize(
  param_set = ParamSetCollection$new(param_sets),
  param_classes = Reduce(intersect, mlr3misc::map(optimizers, "param_classes")),
Contributor commented:

I don't think mlr3misc:: is necessary?

.optimize = function(inst) {
  terminator = inst$terminator
  on.exit({inst$terminator = terminator})
  inner_inst = inst$clone(deep = TRUE)
Contributor commented:

It would be good if this could be avoided, so that when an error happens in the inner optimization the user's inst at least has the partial optimization progress. Did I do this in my original implementation?

Contributor commented:

We specifically do the on.exit() thing so we can modify the inst's $terminator temporarily without needing to clone; or did something get uncovered that makes the clone necessary?
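
A minimal sketch of the pattern being discussed, i.e. swapping the terminator in place and restoring it via on.exit() instead of deep-cloning the instance (the helper name run_stage and the use of TerminatorCombo are illustrative assumptions, not code from this PR):

# run one stage of the chain on the user's instance directly (hypothetical helper)
run_stage = function(inst, optimizer, stage_terminator) {
  old_terminator = inst$terminator
  # restore the original terminator no matter how the stage ends
  on.exit({inst$terminator = old_terminator}, add = TRUE)
  # the stage stops when either its own terminator or the global one fires
  inst$terminator = TerminatorCombo$new(list(stage_terminator, old_terminator))
  optimizer$optimize(inst)
}

Because the stage operates on inst itself, any evaluations made before an error remain in the user's archive, which is the partial-progress behaviour asked for above.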

@be-marc changed the title from OptimizerChain to feat: OptimizerChain on Jan 24, 2022
@sumny (Author) commented Feb 11, 2022

Clearly distinguish between running optimizers in a chain (without sharing info) vs. chaining optimizers that can rely on the archive data of the previous optimizer runs.
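
To illustrate the distinction with plain bbotk calls (no OptimizerChain; assume a freshly constructed instance, and note that in practice each stage would also need its own terminator so the first one does not consume the whole budget):

# (a) chaining WITH shared information: both optimizers evaluate into the same
# instance, so the second one could in principle exploit the first one's archive
opt("random_search")$optimize(instance)
opt("gensa")$optimize(instance)

# (b) chaining WITHOUT sharing information: each optimizer works on its own
# clone taken before any evaluation, so the archives stay independent
inst1 = instance$clone(deep = TRUE)
inst2 = instance$clone(deep = TRUE)
opt("random_search")$optimize(inst1)
opt("gensa")$optimize(inst2)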
