ENH - Run on grid stopping criterion #671
This PR adds a `RunOnGridCriterion` so that solvers can be run over a grid of parameters that can represent anything: a number of iterations, a target accuracy, a hyperparameter value, etc. The solver is stopped either when the end of the grid is reached or when the default triggers in `StoppingCriterion.should_stop()` are activated. I have used this criterion for the sparse support recovery benchmark, so I think it may be useful for other benchmarks as well.

## Example of usage
Below is an example where one runs a solver over a grid of two parameters `a` and `b`, chosen within `np.linspace(0, 0.1, 10)` and `np.linspace(1, 1000, 10)`, respectively.

## Caching and grid parameter
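For concreteness, here is a standalone sketch of the grid-iteration idea described above, including a `grid` parameter with a default value in `__init__()` (which is where the caching issue discussed next comes into play). All names here are illustrative only; benchopt's real `StoppingCriterion` API differs.

```python
from itertools import product


def linspace(start, stop, num):
    """Pure-Python stand-in for np.linspace (evenly spaced values)."""
    step = (stop - start) / (num - 1)
    return [start + i * step for i in range(num)]


class RunOnGridCriterionSketch:
    """Illustrative stand-in for the proposed RunOnGridCriterion.

    `grid` is given a default value here, mirroring the constraint
    described below: the benchmark cannot run without one.
    """

    def __init__(self, grid=(0,)):
        self.grid = list(grid)

    def run(self, solve_one, should_stop=lambda: False):
        # Stop either when the grid is exhausted or when a default
        # trigger (here the `should_stop` callback) fires first.
        results = []
        for point in self.grid:
            if should_stop():
                break
            results.append(solve_one(point))
        return results


# Grid over two parameters `a` and `b`, as in the example above.
grid = product(linspace(0, 0.1, 10), linspace(1, 1000, 10))
criterion = RunOnGridCriterionSketch(grid=grid)
results = criterion.run(lambda ab: ab[0] * ab[1])
```

In this sketch the grid is a plain iterable of parameter tuples, so `run()` touches each of the 100 `(a, b)` combinations exactly once unless the stop callback fires earlier.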
I faced a small issue while implementing this feature. The `grid` parameter in the `__init__()` function must have a default value, otherwise benchopt is not able to run the benchmark. It turns out that the stopping criterion is cached by benchopt, so the default `grid` value defined in the `RunOnGridCriterion` class is always the one used in the `run` method of the solvers, even when a new `stopping_criterion` with a different `grid` value is defined in the solver class. @tomMoral said that this could be solved by overloading the `get_runner_instance()` function, but I am not sure I have done that in the best way. Let me know if you think of a smarter way to do it!

## Checks before merging PR