salimandre/Metaheuristic-TSP-LSCO

Metaheuristic Tasks

We ran our Python scripts in a virtual environment; the list of its packages can be found in the py folder, along with the scripts themselves. The code is straightforward to use, although the TSP scripts require our dj38 and qa194 folders with their json, jpg and txt files.

Large Scale Continuous Optimization

Since we wanted to see how well Particle Swarm Optimization performs on different objective functions and in different dimension settings, we used only this one algorithm. We used the Pyswarms library for the PSO evaluations. Pyswarms was able to handle boundary constraints, although more powerful implementations may exist that we missed.

For dimension 50 we used a budget of 250K evaluations, and for dimension 500 a budget of 1.25M evaluations, computing 25 runs for every function/dimension pair. We chose the cognitive, social and inertia parameters by a random search over 100 settings, each setting evaluated 10 times on the Shifted Sphere function in dimension 50. We kept these same parameters for all other computations.
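To make the role of the three coefficients concrete, here is a minimal numpy sketch of a global-best PSO update using the cognitive, social and inertia values found by the random search. This is an illustration only, not the Pyswarms implementation we actually used; the function and parameter names are our own.

```python
import numpy as np

def sphere(x):
    # Swarm-style objective: x has shape (n_particles, dim),
    # returns one value per particle.
    return np.sum(x ** 2, axis=1)

def pso(obj, dim, n_particles, iters, bounds=(-100.0, 100.0),
        c1=0.4, c2=0.3, w=0.8, seed=0):
    """Minimal global-best PSO with cognitive (c1), social (c2)
    and inertia (w) coefficients."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), obj(x)
    g = pbest[np.argmin(pbest_val)].copy()
    g_val = pbest_val.min()
    for _ in range(iters):
        r1 = rng.random(x.shape)
        r2 = rng.random(x.shape)
        # Velocity: inertia + pull toward personal best + pull toward global best.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)  # crude boundary-constraint handling
        val = obj(x)
        better = val < pbest_val
        pbest[better] = x[better]
        pbest_val[better] = val[better]
        if pbest_val.min() < g_val:
            g_val = pbest_val.min()
            g = pbest[np.argmin(pbest_val)].copy()
    return g, g_val

best, best_val = pso(sphere, dim=10, n_particles=200, iters=100)
```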

In dimension 50 we tried two settings: 1000 particles with 250 iterations, and 5000 particles with 50 iterations. The latter proved better on the Sphere and Rosenbrock functions. In dimension 500 we tried 5000 particles with 250 iterations and 10000 particles with 100 iterations, but they gave similar results.

We could observe the curse of dimensionality. For Rosenbrock and Sphere the domain is [−100, 100]^D, whose volume grows exponentially with the dimension, and the Sphere function also becomes sharper and sharper. Hence, for some functions and domains, achieving good results requires more and more resources as the dimension increases. Results could have been improved by using more particles, but that is expensive in RAM and leads to large matrix computations, for which the library we used was not well suited.
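A small numpy experiment (illustrative only, not part of the original scripts) makes the volume argument concrete: the sub-cube [−10, 10]^D occupies a 10^−D fraction of [−100, 100]^D, so a fixed sampling budget lands further and further from the optimum as D grows.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # fixed budget of uniform random samples
for d in (2, 50, 500):
    pts = rng.uniform(-100, 100, size=(n, d))
    # Distance from the best sample to the optimum (here placed at 0)
    # grows with the dimension even though the budget stays the same.
    closest = np.linalg.norm(pts, axis=1).min()
    print(f"dim={d}: closest of {n} samples to the optimum: {closest:.1f}")
```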

| Code | CEC’08 Name | Dim | Bounds | Min. Value | Algo | Particles | Iter. | Evals | Stop. Criteria | Cognitive | Social | Inertia | Comput. Time (s) | # runs | Mean Value | Median Value | Best Value |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| F1 | Shifted Sphere | 50 | [−100, 100] | -450 | PSO | 5000 | 50 | 250K | n_evals | 0.4 | 0.3 | 0.8 | 1.9 | 25 | 5690.8 | 5540.3 | 2512.7 |
| F2 | Shifted Schwefel | 50 | [−100, 100] | -450 | PSO | 1000 | 250 | 250K | n_evals | 0.4 | 0.3 | 0.8 | 3.8 | 25 | -399.9 | -400.3 | -407.2 |
| F3 | Shifted Rosenbrock | 50 | [−100, 100] | 390 | PSO | 5000 | 50 | 250K | n_evals | 0.4 | 0.3 | 0.8 | 6.7 | 25 | 28753075.4 | 19607298.2 | 1771071.4 |
| F4 | Shifted Rastrigin | 50 | [−5, 5] | -330 | PSO | 1000 | 250 | 250K | n_evals | 0.4 | 0.3 | 0.8 | 8.7 | 25 | -3.6 | 1.3 | -87.4 |
| F5 | Shifted Griewank | 50 | [−600, 600] | -180 | PSO | 1000 | 250 | 250K | n_evals | 0.4 | 0.3 | 0.8 | 9.8 | 25 | -106.9 | -115.9 | -143.8 |
| F6 | Shifted Ackley | 50 | [−32, 32] | -140 | PSO | 1000 | 250 | 250K | n_evals | 0.4 | 0.3 | 0.8 | 5.7 | 25 | -126.2 | -126.2 | -130.4 |
| Code | CEC’08 Name | Dim | Bounds | Min. Value | Algo | Particles | Iter. | Evals | Stop. Criteria | Cognitive | Social | Inertia | Comput. Time (s) | # runs | Mean Value | Median Value | Best Value |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| F1 | Shifted Sphere | 500 | [−100, 100] | -450 | PSO | 5000 | 250 | 1.25M | n_evals | 0.4 | 0.3 | 0.8 | 43.8 | 25 | 1.1e6 | 1.1e6 | 9.5e5 |
| F2 | Shifted Schwefel | 500 | [−100, 100] | -450 | PSO | 5000 | 250 | 1.25M | n_evals | 0.4 | 0.3 | 0.8 | 44.0 | 25 | -345.2 | -345.4 | -349.7 |
| F3 | Shifted Rosenbrock | 500 | [−100, 100] | 390 | PSO | 5000 | 250 | 1.25M | n_evals | 0.4 | 0.3 | 0.8 | 109.3 | 25 | 297e9 | 297e9 | 267e9 |
| F4 | Shifted Rastrigin | 500 | [−5, 5] | -330 | PSO | 5000 | 250 | 1.25M | n_evals | 0.4 | 0.3 | 0.8 | 56.3 | 25 | 6291.7 | 6291.7 | 5791.9 |
| F5 | Shifted Griewank | 500 | [−600, 600] | -180 | PSO | 5000 | 250 | 1.25M | n_evals | 0.4 | 0.3 | 0.8 | 21.5 | 25 | 8509.6 | 8456.2 | 8105.1 |
| F6 | Shifted Ackley | 500 | [−32, 32] | -140 | PSO | 5000 | 250 | 1.25M | n_evals | 0.4 | 0.3 | 0.8 | 64.0 | 25 | -119.6 | -119.6 | -119.6 |

Discrete Optimization: TSP

During the course we had already implemented our own genetic algorithm and applied it to TSP (https://github.com/salimandre/metaheuristics). This time, for a change, we chose local search algorithms, namely 2-swap, 3-swap, 2-opt, and a greedy algorithm. Here we implemented everything ourselves.

In k-swap, at each step we permute k cities in the path and keep the move if it decreases the total distance. In k-opt, at each step we remove k edges from the path and add k new edges, keeping the move if it decreases the total distance. We also used a greedy policy, both as a baseline and as an initialization: we repeatedly connect the current city to its closest unvisited city.
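The greedy construction and the 2-opt move described above can be sketched as follows. This is a self-contained illustration on random coordinates, not our actual scripts; the helper names and the first-improvement acceptance rule are our choices for the example.

```python
import math
import random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(tour, cities):
    # Closed tour: the last city connects back to the first.
    return sum(dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def greedy_tour(cities):
    """Greedy construction: repeatedly connect the current city
    to the closest unvisited city."""
    unvisited = set(range(1, len(cities)))
    tour = [0]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda c: dist(cities[last], cities[c]))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

def two_opt(tour, cities):
    """First-improvement 2-opt: remove two edges, reconnect by reversing
    the segment between them, keep the move if the tour gets shorter."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            # When i == 0, skip j == n - 1: those edges are adjacent.
            for j in range(i + 2, n - (i == 0)):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                delta = (dist(cities[a], cities[c]) + dist(cities[b], cities[d])
                         - dist(cities[a], cities[b]) - dist(cities[c], cities[d]))
                if delta < -1e-12:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

random.seed(1)
cities = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(40)]
t0 = greedy_tour(cities)           # greedy initialization
t1 = two_opt(t0[:], cities)        # local search from the greedy tour
```

Starting 2-opt from the greedy tour rather than a random permutation is the "2-opt + Greedy" combination reported in the tables below.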

2-opt with the greedy initialization produced convincing results in a short amount of time. To improve results further, we could have added a technique such as Simulated Annealing to avoid getting stuck too quickly in local minima. For larger TSP instances, we could have used a k-d tree to make the nearest-neighbor computations more efficient.

dj38

| Algo | Iters | Stop. Criteria | Comput. Time (s) | # runs | Mean Value | Best Value |
| --- | --- | --- | --- | --- | --- | --- |
| Greedy | 700 | when Hamiltonian | 0.012 | 10 | 6771.5 | 6771.5 |
| 2-swap | 100K | n_iters or when all 2-swap neighbors have been visited | 0.24 | 10 | 9978.8 | 8111.6 |
| 2-swap + Greedy | 100K | n_iters or when all 2-swap neighbors have been visited | 0.11 | 10 | 6723.4 | 6664.1 |
| 3-swap | 100K | n_iters or when all 3-swap neighbors have been visited | 6.75 | 10 | 9349.3 | 7937.7 |
| 3-swap + Greedy | 100K | n_iters or when all 3-swap neighbors have been visited | 4.12 | 10 | 6659.4 | 6659.4 |
| 2-opt | 10K | n_iters or when all 2-opt neighbors have been visited | 0.1 | 10 | 7353.0 | 6659.4 |
| 2-opt + Greedy | 10K | n_iters or when all 2-opt neighbors have been visited | 0.01 | 10 | 6694.6 | 6664.1 |

2-opt and 3-swap with random initialization applied to dj38, respectively

qa194

| Algo | Iters | Stop. Criteria | Comput. Time (s) | # runs | Mean Value | Best Value |
| --- | --- | --- | --- | --- | --- | --- |
| Greedy | 20K | when Hamiltonian | 0.36 | 10 | 12346.9 | 12346.9 |
| 2-swap | 250K | n_iters or when all 2-swap neighbors have been visited | 39.7 | 10 | 20028.8 | 18409.6 |
| 2-swap + Greedy | 250K | n_iters or when all 2-swap neighbors have been visited | 8.1 | 10 | 11515.8 | 11318.5 |
| 2-opt | 250K | n_iters or when all 2-opt neighbors have been visited | 32.2 | 10 | 10520.8 | 10108.0 |
| 2-opt + Greedy | 250K | n_iters or when all 2-opt neighbors have been visited | 11.9 | 10 | 10078.4 | 9931.1 |

2-opt with greedy initialization applied to qa194

About

Discrete and continuous optimization problems solved iteratively and approximately by metaheuristic algorithms.
