
Pre-Optimization Probing Causes Intensifier to Error Out #1094

Open
Shakesbeery opened this issue Feb 8, 2024 · 1 comment

Comments

@Shakesbeery

Description

NOTE: This may be related to #1088 and #1086, but if so, the triggering mechanism is slightly different. Fixing one may fix the others, though, or at least provide a temporary patch.

In various use cases I have a lot of prior experimentation data that can be used to warm start the BO process. I do this by starting with a series of optimizer.tell() calls, providing the configurations and outcomes of the previous experiments. Once the initial configuration probing is done, I resume an ask-and-tell workflow. Unfortunately, when I then call optimizer.ask(), the following error occurs:

[INFO][abstract_intensifier.py:287] Added existing seed 0 from runhistory to the intensifier.
[ERROR][intensifier.py:134] Intensifier could not find any new trials.

StopIteration                             Traceback (most recent call last)
Cell In[15], line 48
     45     results = TrialValue(cost=val)
     46     optimizer.tell(info, results)
---> 48 optimizer.ask()

File ~\anaconda3\envs\badass\lib\site-packages\smac\facade\abstract_facade.py:276, in AbstractFacade.ask(self)
    274 def ask(self) -> TrialInfo:
    275     """Asks the intensifier for the next trial."""
--> 276     return self._optimizer.ask()

File ~\anaconda3\envs\badass\lib\site-packages\smac\main\smbo.py:153, in SMBO.ask(self)
    150     callback.on_ask_start(self)
    152 # Now we use our generator to get the next trial info
--> 153 trial_info = next(self._trial_generator)
    155 # Track the fact that the trial was returned
    156 # This is really important because otherwise the intensifier would most likly sample the same trial again
    157 self._runhistory.add_running_trial(trial_info)

StopIteration: 

Steps/Code to Reproduce

The following is a bit contrived because I can't share my exact code, but it still reproduces the same error.

import json
import numpy as np
from smac import Scenario
from smac.runhistory import TrialInfo, TrialValue
from smac import HyperparameterOptimizationFacade
from ConfigSpace import Categorical, Float, Integer
from ConfigSpace import Configuration, ConfigurationSpace

lookup = {"Bill": 10, "Bob": 5, "Sally": 0}

def wonky_quad(x0, x1, name):
    term = lookup[name]
    return (x0**2 + x1) - term

def dummy(cost, seed=0):  # placeholder target function; it is never executed in this ask-and-tell workflow
    return cost

config_dict = {
                "n_trials": 500,
                "name": "SampleError",
                "seed": 0
            }


config_space = ConfigurationSpace()
config_space.add_hyperparameter(Categorical(name="name", items=["Bill", "Bob", "Sally"]))
config_space.add_hyperparameter(Float(name="x0", bounds=[-10, 10]))  
config_space.add_hyperparameter(Float(name="x1", bounds=[-10, 10]))

scenario = Scenario(config_space, **config_dict)

optimizer = HyperparameterOptimizationFacade
acq = optimizer.get_acquisition_function(scenario, xi=0.1)
intensifier = optimizer.get_intensifier(scenario, max_config_calls=1)
optimizer = optimizer(scenario, dummy, acquisition_function=acq,
                                intensifier=intensifier, overwrite=True)

for nums in np.random.randint(-10, 11, (18, 2)):  # 18 warm-start trials; with 17, the ask() below succeeds (see "Solutions?")
    x0, x1 = nums.astype(np.float64)
    name = np.random.choice(["Bill", "Bob", "Sally"])
    val = wonky_quad(x0, x1, name)
    values = {"x0": x0, "x1": x1, "name": name}
    current_config = Configuration(config_space, values=values)
    info = TrialInfo(config=current_config, seed=0)
    results = TrialValue(cost=val)
    optimizer.tell(info, results)
    
optimizer.ask()

Expected Results

The expectation is that calling optimizer.ask() returns a TrialInfo object such as:

TrialInfo(config=Configuration(values={
  'name': 'Sally',
  'x0': 0.7684221677482128,
  'x1': -5.670902710407972,
}), instance=None, seed=0, budget=None)

Actual Results

We get the error mentioned above (with a little more context):

[INFO][abstract_initial_design.py:147] Using 30 initial design configurations and 0 additional configurations.
[INFO][abstract_intensifier.py:515] Added config 437e9d as new incumbent because there are no incumbents yet.
[INFO][abstract_intensifier.py:590] Added config 1e91ec and rejected config 437e9d as incumbent because it is not better than the incumbents on 1 instances:
[INFO][configspace.py:175] --- name: 'Bill' -> 'Bob'
[INFO][configspace.py:175] --- x0: 4.0 -> 2.0
[INFO][configspace.py:175] --- x1: -9.0 -> -10.0
[INFO][intensifier.py:126] Added config 437e9d from runhistory to the intensifier queue.
[INFO][intensifier.py:126] Added config a76f0f from runhistory to the intensifier queue.
[INFO][intensifier.py:126] Added config ce98c7 from runhistory to the intensifier queue.
[INFO][intensifier.py:126] Added config 04306f from runhistory to the intensifier queue.
[INFO][intensifier.py:126] Added config 008689 from runhistory to the intensifier queue.
[INFO][intensifier.py:126] Added config db4ced from runhistory to the intensifier queue.
[INFO][intensifier.py:126] Added config df0103 from runhistory to the intensifier queue.
[INFO][intensifier.py:126] Added config 67c402 from runhistory to the intensifier queue.
[INFO][intensifier.py:126] Added config 38677d from runhistory to the intensifier queue.
[INFO][intensifier.py:126] Added config a197a6 from runhistory to the intensifier queue.
[INFO][intensifier.py:126] Added config 217888 from runhistory to the intensifier queue.
[INFO][intensifier.py:126] Added config 89a24d from runhistory to the intensifier queue.
[INFO][intensifier.py:126] Added config 4a0661 from runhistory to the intensifier queue.
[INFO][intensifier.py:126] Added config 1f7ee7 from runhistory to the intensifier queue.
[INFO][intensifier.py:126] Added config 400a2c from runhistory to the intensifier queue.
[INFO][intensifier.py:126] Added config 7140ea from runhistory to the intensifier queue.
[INFO][intensifier.py:126] Added config f8f3c0 from runhistory to the intensifier queue.
[INFO][intensifier.py:126] Added config c79379 from runhistory to the intensifier queue.
[INFO][intensifier.py:126] Added config d225cb from runhistory to the intensifier queue.
[INFO][intensifier.py:126] Added config 1e91ec from runhistory to the intensifier queue.
[INFO][abstract_intensifier.py:287] Added existing seed 0 from runhistory to the intensifier.
[ERROR][intensifier.py:134] Intensifier could not find any new trials.
---------------------------------------------------------------------------
StopIteration                             Traceback (most recent call last)
Cell In[17], line 48
     45     results = TrialValue(cost=val)
     46     optimizer.tell(info, results)
---> 48 optimizer.ask()

File ~\anaconda3\envs\badass\lib\site-packages\smac\facade\abstract_facade.py:276, in AbstractFacade.ask(self)
    274 def ask(self) -> TrialInfo:
    275     """Asks the intensifier for the next trial."""
--> 276     return self._optimizer.ask()

File ~\anaconda3\envs\badass\lib\site-packages\smac\main\smbo.py:153, in SMBO.ask(self)
    150     callback.on_ask_start(self)
    152 # Now we use our generator to get the next trial info
--> 153 trial_info = next(self._trial_generator)
    155 # Track the fact that the trial was returned
    156 # This is really important because otherwise the intensifier would most likly sample the same trial again
    157 self._runhistory.add_running_trial(trial_info)

StopIteration: 

Solutions?

The root of the problem, I think, lies in the internal mechanics of the tell() function. It is designed for the typical ask-and-tell workflow, not for initial configuration space probing. As a result, smbo.tell() adds trials to the runhistory even though they have never been intensified or been part of a running trial. This causes the intensifier._queue to suddenly balloon in intensifier.__iter__ here:

if len(self._queue) == 0:
    for config in rh.get_configs():
        hash = get_config_hash(config)
        self._queue.append((config, 1))
        logger.info(f"Added config {hash} from runhistory to the intensifier queue.")

Then the intensifier enters the loop:

fails = -1
while True:
    fails += 1

Here's where the problem starts. fails is incremented inside the while loop, and we always enter the else portion of the iterator (line 236 in my version) to find a challenger:

else:
    logger.debug("Start finding a new challenger in the queue:")
    for i, (config, N) in enumerate(self._queue.copy()):
        config_hash = get_config_hash(config)

Because the loop always breaks at line 296, we start back at the top of the while loop and fails += 1. At some point, somewhat arbitrarily, intensifier._retries was set to 16. Because fails starts at -1, this gives us 17 chances to probe the configuration space before we accidentally trigger the retry failure. This is shown by changing np.random.randint(-10, 11, (18, 2)) to np.random.randint(-10, 11, (17, 2)) in my sample code, which then runs without issue.

Since it doesn't make sense to truncate previous knowledge before optimizing, I've bypassed this problem by setting optimizer.intensifier._retries = len(payload["previous_trials"]) + 1 before I call optimizer.ask(). This allows the intensifier to run its loop without triggering the retry failure.
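
Applied to the reproduction script above, the workaround looks like this (n_warmstart is a hypothetical stand-in for len(payload["previous_trials"]) from my real code):

n_warmstart = 18  # number of warm-start tell() calls made above
optimizer.intensifier._retries = n_warmstart + 1  # default is 16
optimizer.ask()  # now returns a TrialInfo instead of raising StopIteration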

I can also use optimizer.intensifier.reset() after the series of optimizer.tell() calls, but I don't know what else that may affect, so I avoid it.
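
For completeness, that alternative is simply:

optimizer.intensifier.reset()  # resets the intensifier's internal state; side effects beyond that are unclear to me
optimizer.ask()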

There is currently no way to pass retries to the optimizer.get_intensifier() function call, which complicates dynamically setting that limit at configuration time. Exposing it at a higher level might at least alleviate the issue for some users.

Probably the more durable solution is to be more specific about what counts as a failure in the intensification loop. Right now every iteration is considered a failure, but that's not necessarily true. What was the original intent of checking if fails > self._retries, and how do we avoid that colliding with configuration space probing prior to optimization?

Versions

'2.0.1'

benjamc (Contributor) commented Feb 13, 2024

Hi Shakesbeery,

Unfortunately, I was not able to reproduce your error.

Originally, "fails" was introduced as previously tried configurations might be sampled again, but you don't want to return them and instead sample a new one. In our experiments the retries are mostly low, but with longer runs it is possible to get a higher amount of retries.

Also, in your example the initial design is still used. You can bypass it by passing initial_design = optimizer.get_initial_design(scenario=scenario, n_configs=0) to the facade, roughly as sketched below. Does this maybe help?
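
A sketch of how that would slot into the reproduction script, replacing the optimizer construction (facade_cls is just a renamed handle for HyperparameterOptimizationFacade so the optimizer name is not reused before construction):

facade_cls = HyperparameterOptimizationFacade
acq = facade_cls.get_acquisition_function(scenario, xi=0.1)
intensifier = facade_cls.get_intensifier(scenario, max_config_calls=1)
initial_design = facade_cls.get_initial_design(scenario=scenario, n_configs=0)  # request zero initial design configs
optimizer = facade_cls(scenario, dummy, acquisition_function=acq,
                       intensifier=intensifier, initial_design=initial_design,
                       overwrite=True)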
