
Scalability of GP #86

Open
rmrmg opened this issue Mar 1, 2024 · 5 comments

rmrmg commented Mar 1, 2024

I have a problem with openbox: optimization with prf works fast, but the quality of the results is lower compared to gp. Unfortunately, gp becomes very slow after several hundred points. There are procedures that reduce the complexity of GPs (e.g. https://proceedings.neurips.cc/paper_files/paper/2019/file/01ce84968c6969bdd5d51c5eeaa3946a-Paper.pdf).
Is anything of that sort available in openbox?
If not, how can I run a long optimization effectively? I have the following ideas:
a) Perform a long optimization with prf, feed the history with those results, and then run gp; however, each gp step will still be very slow.
b) Optimize in sub-spaces: divide the whole space into N non-overlapping parts and perform N "independent" sub-optimizations. The main optimization process then takes a step in the sub-space where the value of the acquisition function is highest (a rough sketch is below).
What do you think about b)? Do you have any better idea?
If you think b) is worth trying, how can I do this with openbox? I could build on https://open-box.readthedocs.io/en/latest/examples/ask_and_tell.html, but is there anything like advisor.get_best_value_of_acquisition_function()?
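
To make b) concrete, here is a rough sketch with the ask-and-tell interface. sub_spaces, N, n_iters, and obj_func are placeholders for my own problem, and I am assuming that acquisition values from separately trained models can be compared directly, which may not hold in general:

import numpy as np
from openbox import Advisor, Observation

# sub_spaces: hypothetical list of N disjoint config spaces covering the whole space
advisors = [Advisor(sub_spaces[i], task_id=f'sub{i}') for i in range(N)]

for _ in range(n_iters):
    # ask each sub-optimizer for one candidate from its own sub-space
    candidates = [adv.get_suggestion() for adv in advisors]
    # score each candidate with its own advisor's acquisition function
    # (note: values from different models are not strictly comparable)
    scores = [np.asarray(adv.acquisition_function([c])).item()
              for adv, c in zip(advisors, candidates)]
    # evaluate only the candidate with the highest acquisition value
    best = int(np.argmax(scores))
    y = obj_func(candidates[best])
    observation = Observation(config=candidates[best], objectives=[y])
    advisors[best].update_observation(observation)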

jhj0411jhj (Member) commented

Hi @rmrmg, there is an auto-switch mechanism in openbox currently. If you set surrogate_model='auto' and the model is decided to be 'gp', the model will be switched to 'prf' automatically after 300 iterations. We may consider implementing the algorithm from the paper you referenced in the future.
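
As a minimal end-to-end sketch of this setup (the toy objective is made up for demonstration, and I am assuming the keyword is surrogate_type='auto', as in the quick-start docs):

import numpy as np
from openbox import Optimizer, space as sp

# toy objective for illustration only
def objective_function(config):
    x = np.array([config['x1'], config['x2']])
    return {'objectives': [float(np.sum(x ** 2))]}

space = sp.Space()
space.add_variables([sp.Real('x1', -5.0, 10.0), sp.Real('x2', 0.0, 15.0)])

opt = Optimizer(
    objective_function,
    space,
    max_runs=1000,
    surrogate_type='auto',  # starts with 'gp', auto-switches to 'prf' after 300 iterations
    task_id='auto_switch_demo',
)
history = opt.run()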

For (b), you can take the following code as a reference:

import numpy as np
from openbox import Advisor, History, Observation

advisor = Advisor(...)  # fill in your config space and options
history = advisor.get_history()
for i in range(100):
    # split the shared history into two halves (even/odd observations)
    history1 = History(...)
    history2 = History(...)
    history1.update_observations(history.observations[::2])
    history2.update_observations(history.observations[1::2])
    # get one suggestion from each half of the history
    config1 = advisor.get_suggestion(history=history1)
    config2 = advisor.get_suggestion(history=history2)
    # compute acquisition values for both candidates
    all_config = [config1, config2]
    acq_value = advisor.acquisition_function(all_config)
    # evaluate only the candidate with the highest acquisition value
    next_config = all_config[np.argmax(acq_value)]
    y = obj_func(next_config)  # obj_func is your objective function
    # record the result in the shared history
    observation = Observation(config=next_config, objectives=[y])
    history.update_observation(observation)
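
Note that this sketch splits the observation history (even/odd observations) rather than the search space itself: each model is fitted on only half of the data, which is what keeps the per-step cost down, but both suggestions are still drawn from the full space.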

The results may differ from problem to problem. If you want to optimize for 1000-10000 iterations, you can also consider using evolutionary algorithms (see openbox.core.ea_advisor and openbox.core.ea).
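
For example, a rough ask-and-tell sketch with the EA advisor (config_space and obj_func are placeholders, and the exact EA_Advisor constructor arguments may differ between versions):

from openbox import Observation
from openbox.core.ea_advisor import EA_Advisor

advisor = EA_Advisor(config_space)  # config_space: your search space
for _ in range(5000):
    config = advisor.get_suggestion()
    y = obj_func(config)  # obj_func: your objective function
    observation = Observation(config=config, objectives=[y])
    advisor.update_observation(observation)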

For development with openbox, the docs can be helpful: https://open-box.readthedocs.io/

rmrmg commented Mar 5, 2024

Hi @jhj0411jhj, thanks for the answer. Do you have an example tutorial for EA?

jhj0411jhj (Member) commented

That part is still in development. We will update the docs in the future.


rmrmg commented Mar 6, 2024

@jhj0411jhj I found https://github.com/LLNL/MuyGPyS.
What do you think about integrating this with openbox?

jhj0411jhj (Member) commented

Thanks for the suggestion. We will take a look, but it may take some time due to limited manpower.
