
PBT_MARL_watered_down

What's in this repo?

My attempt to reproduce a watered-down version of PBT (Population Based Training) for MARL (multi-agent reinforcement learning), inspired by Algorithm 1 (PBT-MARL) on page 3 of this paper [1].

MAIN differences from the paper:

(1) A simple 1 vs 1 RockPaperScissorsEnv environment (adapted & modified from a toy example from ray) is used instead of the 2 vs 2 dm_soccer environment; see the sketch after this list.

(2) PPO is used instead of SVG0.

(3) No reward shaping.

(4) The evolution eligibility criteria documented in B2 on page 16 of the paper [1] are not implemented.

(5) Probably many more...
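For reference, here is a minimal sketch of what a 1 vs 1 rock-paper-scissors MultiAgentEnv looks like, in the spirit of RLlib's toy example. The class layout, rewards & episode length are illustrative assumptions, not the exact code in this repo:

import gym
from ray.rllib.env.multi_agent_env import MultiAgentEnv

ROCK, PAPER, SCISSORS = 0, 1, 2

class RockPaperScissorsEnv(MultiAgentEnv):
    """Two-seat repeated rock-paper-scissors between player_A & player_B."""

    def __init__(self, config=None):
        self.action_space = gym.spaces.Discrete(3)
        self.observation_space = gym.spaces.Discrete(3)  # opponent's last move
        self.max_moves = (config or {}).get("max_moves", 10)
        self.num_moves = 0

    def reset(self):
        self.num_moves = 0
        # Dummy initial observation for both seats.
        return {"player_A": ROCK, "player_B": ROCK}

    def step(self, action_dict):
        a, b = action_dict["player_A"], action_dict["player_B"]
        if a == b:                      # draw
            r_a, r_b = 0.0, 0.0
        elif (a - b) % 3 == 1:          # a beats b
            r_a, r_b = 1.0, -1.0
        else:                           # b beats a
            r_a, r_b = -1.0, 1.0
        self.num_moves += 1
        obs = {"player_A": b, "player_B": a}
        rewards = {"player_A": r_a, "player_B": r_b}
        dones = {"__all__": self.num_moves >= self.max_moves}
        return obs, rewards, dones, {}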

What works?

(1) Policy weights can be inherited between different agents in the population.

(2) Learning rate & gamma are the only 2 hyperparameters involved for now. Both can be inherited/mutated: the learning rate can be resampled or perturbed, while gamma can only be resampled (see the sketch right below). Both hyperparameter changes are verifiable in TensorBoard.
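A minimal sketch of the resample/perturb logic, assuming illustrative ranges & perturbation factors (the repo's actual values may differ):

import random

# Illustrative search space; the ranges/choices used in the repo may differ.
LR_RANGE = (1e-4, 1e-2)
GAMMA_CHOICES = [0.9, 0.95, 0.99, 0.997]

def mutate_lr(lr):
    """Learning rate: either resample uniformly or perturb by a factor."""
    if random.random() < 0.5:
        return random.uniform(*LR_RANGE)       # resample
    return lr * random.choice([0.8, 1.2])      # perturb

def mutate_gamma(gamma):
    """Gamma: resample only (no perturbation)."""
    return random.choice(GAMMA_CHOICES)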

Simple walkthrough:

Before each training iteration, the driver (in this context, the main process; this is also where the RLlib trainer resides) randomly selects a pair of agents (agt_i, agt_j, where i != j) from a population of agents. This (i, j) pair takes up the roles of player_A & player_B respectively.
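A minimal sketch of the pair selection; select_pair & CURRENT_PAIR are hypothetical names used to tie the later sketches together, not the repo's exact code:

import random

def select_pair(agent_ids):
    """Pick two distinct agents from the population & assign them to the two seats."""
    agt_i, agt_j = random.sample(agent_ids, 2)
    return {"player_A": agt_i, "player_B": agt_j}

# Which population member currently occupies each seat; these IDs are what gets
# transmitted down to the worker processes before the next iteration.
CURRENT_PAIR = select_pair(["agt_%d" % k for k in range(8)])  # population size 8 is illustrative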

The IDs of i & j are transmitted down to the worker processes. Each worker has 1 or more (vectorized) environments & does its own rollouts. When an episode is sampled (that's when a match ends), the on_episode_end callback is called. That's when the ratings of the match are computed & written to a global storage.
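A minimal sketch of such a callback using RLlib's DefaultCallbacks API. The detached storage actor, its record_match method & the name "global_storage" are assumptions (see the storage sketch further below), and in this sketch the rating update itself lives inside the storage actor:

import ray
from ray.rllib.agents.callbacks import DefaultCallbacks

class PBTCallbacks(DefaultCallbacks):
    def on_episode_end(self, worker, base_env, policies, episode, **kwargs):
        # Cumulative match score from player_A's point of view.
        score = sum(r for (agent_id, _), r in episode.agent_rewards.items()
                    if agent_id == "player_A")
        # Look up the detached storage actor by name & record the result.
        storage = ray.get_actor("global_storage")        # assumed actor name
        storage.record_match.remote(                     # hypothetical method
            CURRENT_PAIR["player_A"], CURRENT_PAIR["player_B"], score)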

When enough samples have been collected, training starts. Training is done using RLlib's DDPPO (a decentralized & distributed variant of PPO). In DDPPO, learning does not happen in the trainer; each worker does its own learning. However, the trainer is still involved in the weight sync.
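A minimal sketch of a multi-agent DDPPO setup, reusing the names from the sketches above. Note that RLlib's DD-PPO implementation is torch-based, and the exact config used in this repo may differ:

import gym
from ray.rllib.agents.ppo import DDPPOTrainer

obs_space = gym.spaces.Discrete(3)
act_space = gym.spaces.Discrete(3)
N_AGENTS = 8  # population size; illustrative

config = {
    "framework": "torch",          # RLlib's DD-PPO runs on torch.distributed
    "num_workers": 2,
    "num_envs_per_worker": 4,
    "callbacks": PBTCallbacks,     # the callback class sketched above
    "multiagent": {
        # One RLlib policy per population member; only the two selected
        # members generate samples in a given iteration.
        "policies": {
            "agt_%d" % k: (None, obs_space, act_space, {}) for k in range(N_AGENTS)
        },
        # Map the two seats to the currently selected population members.
        "policy_mapping_fn": lambda agent_id: CURRENT_PAIR[agent_id],
    },
}

trainer = DDPPOTrainer(env=RockPaperScissorsEnv, config=config)
result = trainer.train()  # one call = one training iteration of the main loop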

When a training iteration completes, the on_train_result callback is called. That's where inheritance & mutation happen (if the conditions are fulfilled).
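A minimal sketch of the exploit (inherit) & explore (mutate) step. The eligibility query & the storage methods are simplified assumptions, and in practice this would be another method on the callbacks class above:

import ray
from ray.rllib.agents.callbacks import DefaultCallbacks

class PBTExploitExplore(DefaultCallbacks):
    def on_train_result(self, trainer, result, **kwargs):
        storage = ray.get_actor("global_storage")                 # assumed actor name
        # Hypothetical query: (dst, src) if dst is eligible to inherit from src.
        dst, src = ray.get(storage.get_exploit_pair.remote())     # hypothetical method
        if dst is None:
            return
        # Inherit: copy the stronger agent's policy weights into the weaker agent,
        # then push the updated weights out to the rollout workers.
        weights = trainer.get_policy(src).get_weights()
        trainer.get_policy(dst).set_weights(weights)
        trainer.workers.foreach_worker(
            lambda w: w.get_policy(dst).set_weights(weights))
        # Explore: mutate the inherited hyperparameters (helpers sketched earlier)
        # & record them in the global storage so the change shows up in the logs.
        hp = ray.get(storage.get_hyperparams.remote(src))         # hypothetical method
        storage.set_hyperparams.remote(                           # hypothetical method
            dst, {"lr": mutate_lr(hp["lr"]), "gamma": mutate_gamma(hp["gamma"])})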

All of the above happens within a single pass of the driver's main training loop. Rinse & repeat.

Note: Global coordination between different processes is done using detached actors from ray.
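A minimal sketch of such a detached actor, using Ray's named/detached actor API; the class, its methods & the name "global_storage" are the assumptions referenced in the earlier sketches:

import ray

ray.init(ignore_reinit_error=True)

@ray.remote
class GlobalStorage:
    """Shared bookkeeping for the whole population (hyperparameters, scores, ratings)."""

    def __init__(self, agent_ids):
        self.data = {
            aid: {"hyperparameters": {"lr": [], "gamma": []},
                  "opponent": [], "score": [], "rating": [], "step": [0]}
            for aid in agent_ids
        }

    def record_match(self, agent_id, opponent_id, score):
        entry = self.data[agent_id]
        entry["opponent"].append(opponent_id)
        entry["score"].append(score)
        # A rating update (e.g. Elo-style) would be computed & appended here.
        entry["rating"].append(entry["rating"][-1] if entry["rating"] else 0.0)

    def dump(self):
        return self.data

    # The exploit/explore helpers used above (get_exploit_pair, get_hyperparams,
    # set_hyperparams) would also live on this actor.

# Created once on the driver; the actor outlives its creator & can be looked up
# by name from any process via ray.get_actor("global_storage").
storage = GlobalStorage.options(
    name="global_storage", lifetime="detached").remote(
    ["agt_%d" % k for k in range(8)])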

Example of what's stored in the global storage:

"""
{'agt_0':
    {'hyperparameters':
        {'lr': [0.0027558, 0.0022046, ...],
         'gamma': [0.9516804908336309, 0.9516804908336309, ...]},
     'opponent': ['NA', 'agt_5', 'agt_5', ...],
     'score': [0, -4.0, -2.0, ...],
     'rating': [0.0, 0.05, 0.05, ...],
     'step': [0]},
 'agt_1': ...,
    .
    .
    .
 'agt_n': ...
}
"""

How to run the contents of this repo?

The easiest way is to run the PBT_MARL_watered_down.ipynb Jupyter notebook in Colab.

Dependencies:

This is developed & tested in Colab.

ray[rllib] > 0.8.6 or the latest wheels for ray; this won't work with ray <= 0.8.6.

tensorflow==2.3.0

Disclaimer:

(1) I'm not affiliated with any of the authors of the paper[1].

References:

[1] Emergent Coordination Through Competition (Liu et al., 2019)
