Optical RL-Gym

OpenAI Gym is the de-facto interface for reinforcement learning environments. Optical RL-Gym builds on top of OpenAI Gym's interfaces to create a set of environments that model optical network problems such as resource management and reconfiguration. Optical RL-Gym can be used to quickly start experimenting with reinforcement learning in optical network problems. Later, you can use the pre-defined environments to create more specific environments for your particular use case.
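Because every environment follows the standard Gym interface, interacting with one is just the usual reset/step loop. Below is a minimal sketch, assuming `env` is an already-created Optical RL-Gym environment (environment creation is shown in the Examples section); actions are sampled at random purely for illustration:

# assumes `env` is an Optical RL-Gym environment already created via gym.make
# (see the Examples section below)
obs = env.reset()
done = False
while not done:
    # sample a random action from the environment's action space
    action = env.action_space.sample()
    # apply the action and observe the resulting state and reward
    obs, reward, done, info = env.step(action)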

If you use Optical RL-Gym in your research, please cite it using the following BibTeX entry:

@inproceedings{optical-rl-gym,
  title = {The {Optical RL-Gym}: an open-source toolkit for applying reinforcement learning in optical networks},
  author = {Carlos Natalino and Paolo Monti},
  booktitle = {International Conference on Transparent Optical Networks (ICTON)},
  year = {2020},
  location = {Bari, Italy},
  month = {July},
  pages = {Mo.C1.1},
  doi = {10.1109/ICTON51198.2020.9203239},
  url = {https://github.com/carlosnatalino/optical-rl-gym}
}

The authors' version is available here.

Features

Across all the environments, the following features are available:

  • Use of NetworkX for topology graph representation, resource tracking, and path computation.
  • Uniform and non-uniform traffic generation.
  • A flag to let agents proactively reject requests.
  • Random number generation with seed management, providing reproducibility of results (see the sketch after this list).
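As a brief illustration of the seed management, the following sketch (a hypothetical check, assuming a pre-built NetworkX `topology` graph and the `RMSA-v0` environment shown in the Examples section) creates two environments with the same seed and verifies that they start from identical observations:

import gym
import numpy as np

import optical_rl_gym  # importing the package registers the environments with Gym

# two environments created with the same seed generate the same
# sequence of traffic requests; `topology` is a pre-built NetworkX
# graph (see the Examples section and the examples folder)
env_a = gym.make('RMSA-v0', topology=topology, seed=10)
env_b = gym.make('RMSA-v0', topology=topology, seed=10)

# the initial observations encode the first request and should match
# (assumes the observation is a numeric array)
assert np.allclose(env_a.reset(), env_b.reset())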

Content of this document

  1. Installation
  2. Environments
  3. Examples
  4. Resources
  5. Contributors
  6. Contact

Installation

This tool uses Gym version 0.21. For compatibility reasons, we use Python 3.9. Here are the recommended steps to install the tool (as of April 2024):

  1. Clone the repository to your machine. Open a terminal, go to the folder where you want the tool to be downloaded, and run:
git clone https://github.com/carlosnatalino/optical-rl-gym.git
cd optical-rl-gym
  2. Create a virtual environment:
python3.9 -m venv .venv
  3. Activate the virtual environment:
source .venv/bin/activate
  4. Install legacy versions of setuptools and pip (more information about why this is needed is available on Stack Overflow):
pip install setuptools==65.5.0 pip==21
  5. Install the Optical RL-Gym with:
pip install -e ".[dev]"

You will be able to run the examples right away.
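To verify the installation, you can check that the optical network environments were registered with Gym. This is a minimal sanity check, assuming the package registers its environments when imported:

import gym

import optical_rl_gym  # assumption: importing the package registers the environments

# list the optical network environments known to Gym
optical_envs = [spec.id for spec in gym.envs.registry.all()
                if 'RWA' in spec.id or 'RMSA' in spec.id]
print(optical_envs)  # expected to include entries such as 'RMSA-v0'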

You can see the dependencies in the setup.py file.

To train reinforcement learning agents, you must create or install RL agent implementations. Libraries such as Stable Baselines (used in the example below) provide ready-to-use RL agents.

Environments

At this moment, the following environments are ready for use:

  1. RWAEnv
  2. RMSAEnv
  3. DeepRMSA

More environments will be added in the near future.

Examples

Training an RL agent for one of the Optical RL-Gym environments takes only a few lines of code.

For instance, you can train a Stable Baselines PPO2 agent on the RMSA environment:

import gym
from stable_baselines import PPO2
from stable_baselines.common.policies import MlpPolicy

import optical_rl_gym  # importing the package registers the environments with Gym

# `topology` is a NetworkX graph describing the optical network
# (pre-built topologies are available in the examples folder)
# define the parameters of the RMSA environment
env_args = dict(topology=topology, seed=10, allow_rejection=False,
                load=50, episode_length=50)
# create the environment
env = gym.make('RMSA-v0', **env_args)
# create the agent with a multi-layer perceptron policy
agent = PPO2(MlpPolicy, env)
# run 10k learning timesteps
agent.learn(total_timesteps=10000)
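Once training finishes, the agent can be evaluated by stepping through an episode with the learned policy. The following is a minimal sketch using the Stable Baselines `predict` API, continuing from the snippet above:

# run one evaluation episode with the trained policy
obs = env.reset()
done = False
total_reward = 0.0
while not done:
    # query the learned policy for the next action
    action, _states = agent.predict(obs)
    obs, reward, done, info = env.step(action)
    total_reward += reward
print('episode reward:', total_reward)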

We provide a set of examples in the examples folder of the repository.

Resources

Contributors

The list of people who have contributed to this project is available on the repository's GitHub page.

Contact

This project is maintained by Carlos Natalino, who can be contacted via carlos.natalino@chalmers.se.