ALNS parameter optimization #170

Open · dietmarwo opened this issue Feb 20, 2024 · 9 comments

Labels: documentation (Improvements or additions to documentation), enhancement (New feature or request)

@dietmarwo

Is your feature request related to a problem? Please describe

While checking out the examples, I found that for some of them (TSP, CVRP) using decay = 0.85 improved the results significantly compared to the chosen value of decay = 0.8. A useful feature would be automated support for parameter testing.

Describe the solution you'd like

The solution would be an automated parameter sweep that executes many optimization runs while varying one or more parameters and the random seed. To utilize modern many-core CPUs, these runs could be parallelized using Python multiprocessing. The result would be the mean and standard deviation of the objective for specific parameter values, which could be visualized using plots.
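
For illustration, a minimal sketch of such a sweep using Python multiprocessing. The run_alns helper is a hypothetical stand-in: a real version would build an ALNS instance for the given decay value, call alns.iterate(...), and return the best objective; the dummy body only makes the sketch runnable.

```python
import itertools
from multiprocessing import Pool

import numpy as np


def run_alns(args):
    """Hypothetical helper: one ALNS run for a given (decay, seed) pair.

    A real version would build the ALNS instance, e.g. with
    RouletteWheel([25, 5, 1, 0], decay, num_destroy, num_repair),
    call alns.iterate(...), and return result.best_state.objective().
    """
    decay, seed = args
    rng = np.random.default_rng(seed)
    return float(rng.normal(loc=100 * decay))  # dummy stand-in objective


if __name__ == "__main__":
    decays = [0.75, 0.80, 0.85, 0.90]
    seeds = range(8)

    with Pool() as pool:  # one worker process per available core
        objs = pool.map(run_alns, itertools.product(decays, seeds))

    # Mean and standard deviation per decay value, over the random seeds.
    table = np.array(objs).reshape(len(decays), len(seeds))
    for decay, row in zip(decays, table):
        print(f"decay={decay:.2f}: mean={row.mean():.2f}, std={row.std():.2f}")
```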

Describe alternatives you have considered

An alternative would be some guidance on how to manually optimize the parameters, maybe in another example or based on an existing one. The Permutation Flow Shop example already contains a section "Tuning the destroy rate"; there could be a similar section for TSP called "Tuning the decay rate".

Additional context

N-Wouda (Owner) commented Feb 20, 2024

Hi @dietmarwo! 👋 How does this compare to the ideas laid out in #109? I think we can combine things, but I'm hesitant to build a custom tuner just for ALNS. It might be easier to hook into existing tuning software instead, but that needs to be explored further.

dietmarwo (Author) commented Feb 20, 2024

I am not sure whether ML hyperparameter tuning is optimal here. In ML we usually use dedicated hardware (TPUs/GPUs), which blocks parallel evaluation of hyperparameter settings. ALNS, by contrast, is often applied to objective functions that can be evaluated single-threaded, so on a modern CPU you could run >= 16 experiments in parallel. Nevertheless, you are right that applying these existing tools is better than nothing. But there are alternatives that avoid the creation of a custom tuner:

  • Start with a simple example section similar to "Tuning the destroy rate".
  • You could apply a parallel optimizer that is not specific to hyperparameters but can evaluate an objective Python function in parallel; see for instance https://github.com/dietmarwo/fast-cma-es/blob/master/tutorials/Sweep.adoc.
    In your case this "objective Python function" would be alns.iterate for specific meta-parameters chosen by the (meta-)optimization algorithm. With a time restriction of 60 seconds per run, on a 16-core machine executing 32 runs in parallel you could perform 60 × 32 = 1920 runs per hour. So even when optimizing multiple meta-parameters at once, you could expect good results after a few hours.

keras-tuner looks very ML-specific to me, but ray.tune may indeed be a valid choice: it supports parallel parameter testing, even utilizing more than one CPU in a cluster.
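
To make the "objective Python function" idea concrete, here is a rough sketch. It assumes the current alns API (ALNS, RouletteWheel, HillClimbing, MaxRuntime); the toy state and operators are placeholders for a real problem, not part of the package.

```python
import numpy.random as rnd

from alns import ALNS
from alns.accept import HillClimbing
from alns.select import RouletteWheel
from alns.stop import MaxRuntime


class Toy:
    """Placeholder solution state; a real TSP/CVRP state would go here."""

    def __init__(self, x):
        self.x = x

    def objective(self):
        return self.x**2


def destroy(state, rng):  # placeholder destroy operator
    return Toy(state.x + rng.uniform(-1, 1))


def repair(state, rng):  # placeholder repair operator
    return Toy(0.9 * state.x)


def meta_objective(decay: float, seed: int = 0) -> float:
    """What the (meta-)optimizer minimizes: best objective of one ALNS run."""
    alns = ALNS(rnd.default_rng(seed))
    alns.add_destroy_operator(destroy)
    alns.add_repair_operator(repair)

    select = RouletteWheel([25, 5, 1, 0], decay, 1, 1)
    result = alns.iterate(Toy(10.0), select, HillClimbing(), MaxRuntime(60))
    return result.best_state.objective()
```

The meta-optimizer then only sees a function from meta-parameters (here just decay) to a scalar objective, which it can evaluate in parallel for different parameter vectors and seeds.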

N-Wouda (Owner) commented Feb 20, 2024

I have had some success applying SMAC3 to tune an ALNS before; some example code for setting that up is available here, in src/tune.py. That's now a few years old, so it might no longer work, but it should not be too hard to adapt if needed.
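
A present-day setup would look roughly like the following sketch, assuming SMAC3's 2.x API; the meta_objective stub is a hypothetical placeholder for a real ALNS run (as sketched earlier in this thread), not the code from src/tune.py.

```python
from ConfigSpace import ConfigurationSpace, Float
from smac import HyperparameterOptimizationFacade, Scenario


def meta_objective(decay: float, seed: int) -> float:
    """Placeholder for one real ALNS run returning its best objective."""
    return (decay - 0.85) ** 2 + 1e-3 * seed


def train(config, seed: int = 0) -> float:
    # SMAC calls this with a sampled configuration and a seed.
    return meta_objective(config["decay"], seed)


configspace = ConfigurationSpace()
configspace.add_hyperparameters([Float("decay", (0.5, 0.99), default=0.8)])

# Scenario also accepts n_workers to evaluate several trials in parallel.
scenario = Scenario(configspace, n_trials=100)
smac = HyperparameterOptimizationFacade(scenario, train)
incumbent = smac.optimize()  # best configuration found
print(incumbent)
```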

Alternatively:

  • We could add more to the docs about how to run these experiments using a hand-rolled tuner (like we did for the flow shop example). This'd have to be fast, showing just the general ideas but not performing a complete, many-hours-long optimisation run.
  • Find some other tuner? I have used SMAC for algorithm configuration, and also briefly looked at irace and ParamILS in the past. There might be alternatives that suit ALNS better: this survey seems like a promising start to finding those.

@N-Wouda N-Wouda added enhancement New feature or request documentation Improvements or additions to documentation labels Feb 20, 2024
dietmarwo (Author)

Indeed, SMAC3's "Scenario" supports the "n_jobs" parameter, with which you can also execute multiple runs in parallel. It may indeed be a good solution. I will also have a look at this survey.

N-Wouda (Owner) commented Feb 20, 2024

Great, let me know what you find! This could be a cool addition to ALNS.

dietmarwo (Author)

Related to the parallelization topic:
I am currently thinking about how to add parallelization to ALNS itself. For instance, I could implement a "parallel_repair" operator that applies several repairs in parallel and submits only the one giving the best objective value. But this would probably be inferior to real parallelism implemented in the algorithm itself: parallel destroys/repairs would use all repair results, not just the best one. Do you have any thoughts or plans about this?
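
Roughly what I have in mind, as a sketch only: it assumes repair operators take (state, rng), that states can be deep-copied, and that the objective is minimized; none of this relies on an existing alns hook.

```python
from concurrent.futures import ThreadPoolExecutor
from copy import deepcopy


def make_parallel_repair(repairs):
    """Wrap several repair operators into one that keeps the best result."""

    def parallel_repair(state, rng, **kwargs):
        rngs = rng.spawn(len(repairs))  # independent streams (NumPy >= 1.25)
        with ThreadPoolExecutor(max_workers=len(repairs)) as pool:
            futures = [
                pool.submit(op, deepcopy(state), r, **kwargs)
                for op, r in zip(repairs, rngs)
            ]
            candidates = [f.result() for f in futures]
        # Submit only the best repaired state (minimization assumed).
        return min(candidates, key=lambda s: s.objective())

    return parallel_repair
```

Threads only pay off here if the repairs release the GIL or are otherwise not CPU-bound in Python; a process pool would additionally need picklable states.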

N-Wouda (Owner) commented Feb 20, 2024

We used to have an issue open about just that, here: #59. I think this is the most basic parallel ALNS paper out there: PALNS. Whether that's the best way to do it is an open question. There are some papers from the past few years that describe parallel ALNS implementations, but I've not seen any that really convinced me they are significantly better than the obvious parallelisation of doing $N$ independent ALNS runs in parallel and returning the best solution out of those. I don't think I can help you much in this direction.
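
For reference, that obvious parallelisation is only a few lines with the standard library; run_alns is again a hypothetical helper that performs one full ALNS run for a given seed and returns its Result object.

```python
from concurrent.futures import ProcessPoolExecutor


def best_of_independent_runs(run_alns, num_runs: int = 16):
    """Run `run_alns(seed)` for several seeds in parallel; keep the best."""
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_alns, range(num_runs)))
    return min(results, key=lambda res: res.best_state.objective())
```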

dietmarwo (Author) commented Feb 20, 2024

> obvious parallelisation of doing independent ALNS runs in parallel

With evolutionary optimization, which I am more familiar with, we face the same problem. Sometimes the "parallel runs" idea works very well, but sometimes it is better to parallelize the evaluation of a population. Population-based EAs can support this by providing an ask/tell interface; the user can then apply parallel evaluation themselves. The question is whether ALNS could support a similar idea: asking the user to apply a list of destroy/repair pairs and return the resulting states. The advantage is that ALNS doesn't have to perform any parallelization itself. So you could "do just ALNS really well" as planned, but still support parallelization. The only internal change would be that you would no longer update the internal state of the algorithm (its weights) for each destroy/repair pair separately, but would instead perform a kind of "batch update" for a list of destroy/repair pairs.
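
Purely as a hypothetical sketch of such an interface (nothing like this exists in the package today):

```python
class BatchALNS:
    """Hypothetical ask/tell-style interface; not part of the alns package."""

    def ask(self, batch_size):
        """Select `batch_size` (destroy, repair) operator pairs."""
        raise NotImplementedError

    def tell(self, pairs, candidate_states):
        """Batch-update the operator weights and the current/best solution
        from the evaluated candidates, instead of updating per iteration."""
        raise NotImplementedError


# Hypothetical usage; the user owns the parallelism:
#
#   pairs = batch_alns.ask(16)
#   candidates = pool.map(evaluate_pair, pairs)  # user-side parallel eval
#   batch_alns.tell(pairs, candidates)
```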

@N-Wouda
Copy link
Owner

N-Wouda commented Feb 21, 2024

I think something like that should be possible, but I don't have the time right now to mock something up. It might be tricky to do this without breaking existing interfaces. Would you be willing to have a go at this if you're interested?
