
Handling if init.design generates constant performances #439

Open
pfistfl opened this issue Sep 3, 2018 · 4 comments

pfistfl commented Sep 3, 2018

As per Bernd's request, here is a reproducible example:

Problem: what happens if we get constant values for the performance measure in the initial design?

library(mlr)
library(OpenML)

tsk = getOMLTask(3903)
lrn = setPredictType(makeLearner("classif.rpart"), "prob")
# Create mlr task with estimation procedure and evaluation measure
z = OpenML::convertOMLTaskToMlr(tsk, measures = auc)
set.seed(201)

lrn.ps = makeParamSet(
  makeNumericParam("cp", lower = 0, upper = 1, default = 0.01),
  makeIntegerParam("maxdepth", lower = 1, upper = 30, default = 30),
  makeIntegerParam("minbucket", lower = 1, upper = 60, default = 1),
  makeIntegerParam("minsplit", lower = 1, upper = 60, default = 20)
)

inner.rdesc = makeResampleDesc("CV", iters = 5)
init.design = data.frame(
  cp = c(0.729248355375603, 0.892083862796426, 0.441975111840293, 0.817802196601406,
         0.564511572476476, 0.463207047665492, 0.554985330672935, 0.443818159168586),
  maxdepth = c(27L, 18L, 18L, 6L, 17L, 17L, 1L, 13L),
  minbucket = c(30L, 30L, 26L, 49L, 20L, 20L, 56L, 27L),
  minsplit = c(18L, 60L, 20L, 39L, 34L, 52L, 14L, 29L)
)
tune.ctrl = makeTuneControlMBO(same.resampling.instance = TRUE, budget = 10, mbo.design = init.design)
lrn.tune = makeTuneWrapper(lrn, inner.rdesc, mlr::auc, par.set = lrn.ps, tune.ctrl)
# Create OMLRun
bmr = mlr::benchmark(lrn.tune, z$mlr.task, z$mlr.rin, measures = z$mlr.measures)
@berndbischl

hi flo,

thanks for the issue. In general it's not really minimal :)
You should rather produce something without mlr / benchmark, where the MBO fitness function is defined manually, not implicitly by mlr. In this case it would also be better to cut OML out.
Just create a design with constant fitness values.

For us / the developers:
we should probably change the settings of the surrogate learner in MBO, especially for our own defaults.
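
A minimal sketch of such a reproducer (not from the thread; it assumes mlrMBO together with the smoof and ParamHelpers helpers it attaches) could be an objective that returns the same value everywhere, handed straight to mbo() without mlr or OpenML:

library(mlrMBO)  # attaches smoof and ParamHelpers

# Objective that is constant everywhere, so every design point
# yields exactly the same performance value.
obj.fun = makeSingleObjectiveFunction(
  name = "constant",
  fn = function(x) 1,
  par.set = makeNumericParamSet("x", len = 1L, lower = 0, upper = 1)
)

des = generateDesign(n = 8L, par.set = getParamSet(obj.fun))
ctrl = setMBOControlTermination(makeMBOControl(), iters = 2L)
res = mbo(obj.fun, design = des, control = ctrl)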

jakob-r commented Sep 16, 2018

@berndbischl Change it to what?

@pfistfl Short answer: A random proposal should be the result. Do you have indications that this is not the case?

Ergo: Just add a warning?

pfistfl commented Nov 19, 2018

After a few hours of debugging:

  ctrl = makeTuneControlMBO(budget = 100L)
  ctrl$mbo.control$on.surrogate.error = "warn"

seems to have the desired effect.

So as a result:

  • on.surrogate.error is doc'ed in makeMBOControl, but not on the Error Handling Page
  • mlr's on.learner.error does not have any effect, as the on.surrogate.error overwrites it.

I think the issue is that:

  1. The error handling page should be somewhere more prominent, maybe even in the
     makeMBOControl() docs.
  2. We might want to add on.surrogate.error to the tutorial.

ja-thomas commented Nov 19, 2018

mbo.ctrl = makeMBOControl(on.surrogate.error = "warn")
ctrl = makeTuneControlMBO(budget = 100L, mbo.control = mbo.ctrl)

would be the correct approach.

The TuneControlMBO stuff is unfortunately really confusing, since we have nested control objects with very similar names...

and some settings of the inner control object (MBOControl) can also be set from the outer control object (TuneControlMBO).
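
For example (just a sketch reusing the arguments already shown above): the budget is set on the outer TuneControlMBO, while on.surrogate.error has to go into the nested MBOControl:

mbo.ctrl = makeMBOControl(on.surrogate.error = "warn")   # inner control: mlrMBO settings
tune.ctrl = makeTuneControlMBO(
  budget = 100L,            # outer control: tuning budget lives here
  mbo.control = mbo.ctrl    # nested MBOControl carries the surrogate error handling
)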
