Implement gp_hedge acquisition function #439
Comments
Hi @pfebrer, this seems like it would be a good addition to the project, in my opinion. However, compared to other acquisition functions, this one seems non-trivial to integrate into the existing codebase, primarily because the process is stateful through the […]
I currently don't have the bandwidth to implement this either. You could give it a try and see if you can work around these issues in a clever way. PS: maybe I misread the paper -- I really just scanned the pseudocode on p. 6 -- and this is not actually an issue.
Indeed, that's the scary part about it: the acquisition function needs to keep a state. But I don't know if it's really incompatible with the current code structure. You already define acquisition functions by initializing a Python object, e.g. utility = UtilityFunction(). So that object could keep a state that is updated when you pass the utility function to the suggest method.
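[Editor's note] A stateful utility object along these lines could look roughly like the sketch below. The class name, the (candidates, gp, y_max) call signature, and the posterior_mean callable are all illustrative assumptions loosely modelled on the UtilityFunction idea mentioned above -- this is not the package's actual API:

```python
import math
import random

class GPHedgeUtility:
    """Sketch of a stateful gp_hedge-style utility object (hypothetical API)."""

    def __init__(self, base_utilities, eta=1.0, seed=0):
        self.base = base_utilities                # callables: (x, gp, y_max) -> score
        self.gains = [0.0] * len(base_utilities)  # the state gp_hedge must keep
        self.eta = eta
        self.rng = random.Random(seed)
        self._nominees = None                     # points nominated last round

    def _probs(self):
        # softmax over cumulative gains (shift by max for numerical stability)
        m = max(self.gains)
        w = [math.exp(self.eta * (g - m)) for g in self.gains]
        total = sum(w)
        return [x / total for x in w]

    def suggest(self, candidates, gp, y_max):
        # each base acquisition function nominates its own best candidate
        self._nominees = [max(candidates, key=lambda x: u(x, gp, y_max))
                          for u in self.base]
        # then one nominee is picked with probability softmax(eta * gains)
        i = self.rng.choices(range(len(self.base)), weights=self._probs())[0]
        return self._nominees[i]

    def update_gains(self, posterior_mean):
        # after evaluating the objective and refitting the GP, reward each
        # base acquisition with the new posterior mean at its own nominee
        self.gains = [g + posterior_mean(x)
                      for g, x in zip(self.gains, self._nominees)]
```

Persisting the gains list alongside the optimizer's saved state is exactly what the save/restore discussion in this thread is about.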
It's true that it could be stored in the […]. Good point about suggest, though -- that should work without issue.
Hmm, to be perfectly correct, yes, but I don't think there would be a major problem with starting the utility from scratch, would there be?
If you restore the optimizer from a saved state (and you set the seeds to the same value), it should proceed exactly as the uninterrupted optimization would, so I think this is a problem: you don't have reproducibility anymore. It would also make the optimization worse -- the point of […]. I really like this acquisition function, and I think it would be a great addition to the package, but I don't think we can add it without some larger restructuring of the […]. I would encourage you to try and implement it for use in suggest-evaluate-register (see here). If you can figure it out, let me know :)
Yes, ideally it should restart from the state of the gp_hedge, but it's not a huge problem in practical terms if you just want to optimize a function. It might take a bit more time to relearn which utility function is better, but maybe it won't be that bad. I don't know if I will have time to do it, because I'm just benchmarking different optimizers for my particular problem. Of course it is always nice to contribute to things, but I'm more of a user in this case 😅
Understandable. If I ever find the time, I will try to implement this, so feel free to leave the issue open for now :)
@pfebrer I have redesigned the acquisition function API and implemented GPHedge. Would you be interested in reviewing and/or testing that code? |
Cool! I am not focused any more on the problem that brought me here, but I am sure I can find time to do some testing and benchmarking. Is it on the master branch already? |
I'll push it to a separate branch when it's ready and ping you, if that works :) |
Is your feature request related to a problem? Please describe.
I'm trying to migrate from scikit-optimize to this package because it seems more actively maintained. However, in scikit-optimize the default acquisition function was gp_hedge, a combination of the three acquisition functions implemented in this package, which is itself not implemented here.
Describe the solution you'd like
Would it be possible to add a gp_hedge utility function to this package?
References or alternative approaches
The methodology is described in https://arxiv.org/pdf/1009.5419.pdf (Algorithm 2 on page 6).
And it is implemented in the scikit-optimize optimizer loop here:
https://github.com/scikit-optimize/scikit-optimize/blob/a2369ddbc332d16d8ff173b12404b03fea472492/skopt/optimizer/optimizer.py#L538-L539
and here:
https://github.com/scikit-optimize/scikit-optimize/blob/a2369ddbc332d16d8ff173b12404b03fea472492/skopt/optimizer/optimizer.py#L596-L602
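[Editor's note] For orientation, the Hedge step that Algorithm 2 wraps around the base acquisition functions can be sketched in plain Python. Everything below -- the function name and the posterior_mean callable -- is illustrative, not code from either package:

```python
import math
import random

def gp_hedge_round(gains, nominees, posterior_mean, eta=1.0, rng=random):
    """One round of the Hedge step in gp_hedge (Algorithm 2), sketched.

    gains[i]       -- cumulative reward of base acquisition function i
    nominees[i]    -- the point that acquisition i nominated this round
    posterior_mean -- callable returning the GP posterior mean at a point
                      (assumed to be the mean AFTER the new evaluation)
    Returns the point to evaluate next and the updated gains.
    """
    # choose among the nominees with probability proportional to exp(eta * gain)
    shift = max(gains)  # subtract the max for numerical stability
    weights = [math.exp(eta * (g - shift)) for g in gains]
    chosen = rng.choices(nominees, weights=weights)[0]
    # every acquisition is then rewarded with the posterior mean at its nominee
    new_gains = [g + posterior_mean(x) for g, x in zip(gains, nominees)]
    return chosen, new_gains
```

The cumulative gains are the state the comments above are concerned with: they must survive between iterations (and across save/restore) for the portfolio to keep learning which base acquisition works best.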
Additional context
I'm new to this field, so I might be missing some points like gp_hedge not being the optimal technique or being described initially in another paper.
Are you able and willing to implement this feature yourself and open a pull request?