
[BUG] Consecutive calls to PSO are not necessarily the same as running with multiple generations #486

Open
tarcisiofischer opened this issue Oct 4, 2021 · 3 comments

tarcisiofischer commented Oct 4, 2021

According to this discussion, all algorithms should behave the same with gen=N vs N calls with gen=1 (did I understand correctly?).

I tried this with PSO in pygmo and couldn't make it work (I'll explain in more detail after the code):

import numpy as np
import pygmo as pg

# Values assumed for illustration; the original report does not state them.
N_PARTICLES = 20
N_GENERATIONS = 10

def pso_run_a(problem, seed):
    population = pg.population(pg.problem(problem), N_PARTICLES, seed=seed)
    algorithm = pg.algorithm(
        pg.pso(
            gen=1,
            omega=1,
            eta1=1,
            eta2=1,
            seed=seed,
            memory=True,
        )
    )
    for i in range(N_GENERATIONS):
        population = algorithm.evolve(population)
        print("Champ> ", population.champion_f[0])

def pso_run_b(problem, seed):
    population = pg.population(pg.problem(problem), N_PARTICLES, seed=seed)
    algorithm = pg.algorithm(
        pg.pso(
            gen=N_GENERATIONS,
            omega=1,
            eta1=1,
            eta2=1,
            seed=seed,
            memory=True,
        )
    )
    # algorithm.set_verbosity(1)
    population = algorithm.evolve(population)
    print("Champ> ", population.champion_f[0])

class MyProblem:
    def fitness(self, x):
        return [np.sum(x**2)]

    def get_bounds(self):
        return ([-10, -10], [10, 10])

Results:
pso_run_a: 35.042803177459504
pso_run_b: 14.44962519803876
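As a stdlib-only illustration of the invariant under discussion (this is toy code, not pagmo's implementation, and all names here are made up): if every piece of per-generation state is stored on the algorithm object, N calls with gen=1 must reproduce a single call with gen=N exactly.

```python
import random

class ToyMemoryOptimizer:
    """Toy 1-D stepper holding all per-generation state (here, a velocity
    and the RNG) on the object, which is what memory=True promises."""

    def __init__(self, seed):
        self._rng = random.Random(seed)
        self._velocity = 0.0  # persisted between evolve() calls

    def evolve(self, x, gen=1):
        for _ in range(gen):
            self._velocity = 0.5 * self._velocity + self._rng.uniform(-1, 1)
            x += self._velocity
        return x

# Ten calls with gen=1 ...
a = ToyMemoryOptimizer(seed=42)
x_a = 0.0
for _ in range(10):
    x_a = a.evolve(x_a, gen=1)

# ... match one call with gen=10, because no state is lost between calls.
b = ToyMemoryOptimizer(seed=42)
x_b = b.evolve(0.0, gen=10)
assert x_a == x_b
```

Because the operations are bitwise identical in both runs, the floats compare exactly equal; any hidden state that is *not* persisted would break this.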


It seems to be a bug related to the way the population evolves. As far as I understand, when the algorithm returns, PSO writes the best-known positions back into the population (important code here), so on the next call the current positions X are reset to lbX. When running a single call with multiple generations, X and lbX can differ (assuming one generation didn't perform better than the previous one) (important code here).

There are several ways of solving this. The simplest, I believe, is to keep an internal m_X in the PSO algorithm class. What are your thoughts on this? Are you interested in a fix specific to the PSO algorithm (I can open a PR)?
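The suspected failure mode and the proposed fix can both be sketched with a stdlib-only toy (not pagmo code; "best" plays the role of lbX and "x" the role of the current position X, and the deterministic step sequence stands in for a seeded velocity update):

```python
import itertools

class LossyPSO:
    """Only the best-so-far position survives a call; the current X is
    discarded, so repeated gen=1 calls restart every time from lbX."""

    def __init__(self):
        self._steps = itertools.cycle([-2.0, 1.0])  # deterministic "velocity"

    def evolve(self, best, gen=1):
        x = best  # BUG: should resume from the previous X, which was lost
        for _ in range(gen):
            x += next(self._steps)
            if abs(x) < abs(best):  # minimising |x|
                best = x
        return best

class MemoryPSO(LossyPSO):
    """Fix sketched in this issue: persist the current position (m_X)."""

    def __init__(self):
        super().__init__()
        self._x = None

    def evolve(self, best, gen=1):
        x = best if self._x is None else self._x
        for _ in range(gen):
            x += next(self._steps)
            if abs(x) < abs(best):
                best = x
        self._x = x
        return best

def run(algo_cls, gens):
    a, b = algo_cls(), algo_cls()
    best_iter = 5.0
    for _ in range(gens):                # N calls with gen=1
        best_iter = a.evolve(best_iter, gen=1)
    best_once = b.evolve(5.0, gen=gens)  # one call with gen=N
    return best_iter, best_once

print(run(LossyPSO, 4))   # the two runs diverge
print(run(MemoryPSO, 4))  # identical once X is persisted
```

The direction of the gap in the toy is incidental; the point is that without the persisted X the two execution modes diverge at the first generation that fails to improve on the previous one.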

Thanks in advance!

darioizzo (Member) commented:

Indeed it's a bug. I just noticed the equivalence is not tested for pso. Have a look at the test for cmaes here:

BOOST_AUTO_TEST_CASE(cmaes_memory_test)

A similar test should be added for pso as well, and it should pass.

Anything that needs to be remembered from previous generations should indeed live in the algorithm's memory. When memory=true it should be remembered.

A PR would indeed be appreciated ...

darioizzo (Member) commented:

In case you do open a PR, could you also make the symmetric modifications to pso_gen?

tarcisiofischer (Author) commented:

@darioizzo I haven't made the pso_gen modifications yet because I wanted to show the pso version first, so you can check whether everything matches the project's patterns and your expectations. Feel free to ask for modifications.

Also note that I had to make further modifications to get all variations of pso working in the test case. Perhaps you'll find it a little over-engineered... Feel free to suggest changes to the implementation. I tried to make it as explicit as I could to avoid confusing future readers of the code.

I'm planning to make similar changes (and add a similar test) to pso_gen after you take a look at the PR :)

Thanks in advance!
