
document/develop more ways to control exploration-exploitation tradeoff #450

Open
zkurtz opened this issue Nov 28, 2018 · 2 comments

zkurtz commented Nov 28, 2018

Here are ways that I see mlrMBO currently offering control over exploration vs exploitation for single-objective tuning:

  • The infill criterion offers a discrete set of choices, each of which implies a particular tradeoff.
  • Specifically, the cb.lambda parameter offers fairly direct control for the lower confidence bound criterion, as in equation (2).
  • setMBOControlInfill(..., interleave.random.points = ?) offers a way to inject some amount of 'pure exploration' into any approach.
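For reference, a minimal sketch of how those last two controls are set (the parameter values here are arbitrary — check `?setMBOControlInfill` for the exact semantics):

```r
library(mlrMBO)

# Lower confidence bound criterion: a larger cb.lambda weights the
# uncertainty term more heavily, i.e. more exploration.
crit = makeMBOInfillCritCB(cb.lambda = 2)

ctrl = makeMBOControl()
ctrl = setMBOControlInfill(ctrl,
  crit = crit,
  # if I read the docs right, this adds 5 uniformly sampled points
  # on top of the proposed point(s) in each iteration
  interleave.random.points = 5)
```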

What other controls exist? Here are some I'd like:

  1. Extend the definition of makeMBOInfillCritEI to accept the cb.lambda parameter too, as a coefficient on the variance term (why not?)
  2. Offer control over the Gaussian process prior of the learner to allow setting a high variance on the prior.
  3. Offer control over the bandwidth of the Gaussian process covariance kernel, to be more or less permissive of wiggly loss surfaces
  4. In case the learner is a random forest, offer controls analogous to (2) and (3).

jakob-r commented Nov 29, 2018

Hi @zkurtz,
thanks for your input. I'd like to add:

  • Recently I added the possibility to implement adaptive (adapting to the progress of the optimization) infill criteria. This experimental feature allows you to set certain parameters depending on the progress of the optimization. A concrete example is the Adaptive CB: one paper suggested that it might be beneficial to start with a big value for lambda and move to a smaller one, and this is possible with this feature.
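A sketch of what that looks like, assuming the makeMBOInfillCritAdaCB constructor and its start/end lambda arguments (experimental, so check `?makeMBOInfillCritAdaCB` for the current argument names):

```r
library(mlrMBO)

# Adaptive CB: lambda is annealed from cb.lambda.start down to
# cb.lambda.end over the course of the optimization, so early
# iterations explore and late iterations exploit.
ctrl = makeMBOControl()
ctrl = setMBOControlInfill(ctrl,
  crit = makeMBOInfillCritAdaCB(cb.lambda.start = 5, cb.lambda.end = 0.1))
```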

Regarding your suggestions:

  1. Do you have any reference that says that this is a good idea? I stumbled upon the epsilon value here but I have not found the reference yet.

  2. Do you mean the nugget setting? You can already set that when you define the learner manually: `lrn = makeLearner("regr.km", nugget = 0.5)`

  3. You can also configure the kernel directly using mlr (see above).

  4. Again, this should be all learner settings.
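To make (2)-(4) concrete, a sketch of passing such learner-level settings through mlr (covtype and nugget are DiceKriging arguments forwarded by regr.km; the random forest names assume regr.randomForest, which forwards to the randomForest package — other forest learners use different names):

```r
library(mlr)

# Gaussian process surrogate: pick a rougher covariance kernel via
# DiceKriging's covtype, and inflate the nugget to add predictive
# variance (more exploration).
gp = makeLearner("regr.km", covtype = "matern3_2", nugget = 0.5)

# Random forest surrogate: smaller nodesize grows deeper trees and a
# wigglier surrogate; more trees stabilize the variance estimate.
rf = makeLearner("regr.randomForest", ntree = 500, nodesize = 3)
```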


zkurtz commented Nov 29, 2018

+1 for the adaptive CB feature.

(1) I don't have a reference.
(2) Yes nugget looks like the thing to start with.

More generally, with (2)-(4) I'm not surprised to hear that these are learner settings. Adding a vignette that highlights how to use these settings to influence the exploration-exploitation tradeoff for the two default learners would be going above and beyond, but I imagine it would be very useful.
