
DiverseRL


DiverseRL is a repository that aims to implement and benchmark reinforcement learning algorithms.

It covers algorithms from various sub-topics of RL (e.g., model-based RL, offline RL) across a wide variety of environments.

Features

  • Wandb logging
  • Tensorboard logging

Installation


You can install the requirements with Poetry:

git clone https://github.com/moripiri/DiverseRL.git
cd DiverseRL

poetry install

Algorithms


Currently, the following algorithms are available.

Model-free Deep RL

Classic RL

Classic RL algorithms, mostly known from Sutton and Barto's Reinforcement Learning: An Introduction. They can be trained in Gymnasium's toy-text environments; a minimal sketch follows the list below.

  • SARSA
  • Q-learning
  • Model-free Monte-Carlo Control
  • Dyna-Q
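
For illustration, here is a minimal tabular Q-learning sketch on Gymnasium's FrozenLake-v1 toy-text environment. It uses plain Gymnasium and NumPy rather than DiverseRL's own classes, and the hyperparameter values are arbitrary placeholders.

import gymnasium as gym
import numpy as np

env = gym.make("FrozenLake-v1")
q_table = np.zeros((env.observation_space.n, env.action_space.n))
alpha, gamma, epsilon = 0.1, 0.99, 0.1  # learning rate, discount factor, exploration rate (placeholders)

for episode in range(5000):
    obs, _ = env.reset()
    done = False
    while not done:
        # epsilon-greedy action selection
        if np.random.rand() < epsilon:
            action = env.action_space.sample()
        else:
            action = int(np.argmax(q_table[obs]))
        next_obs, reward, terminated, truncated, _ = env.step(action)
        # Q-learning update: bootstrap from the greedy value of the next state
        q_table[obs, action] += alpha * (
            reward + gamma * np.max(q_table[next_obs]) * (not terminated) - q_table[obs, action]
        )
        obs = next_obs
        done = terminated or truncated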

Getting Started

Training requires two Gymnasium environments (one for training and one for evaluation), an algorithm, and a trainer.

The examples/ folder provides Python scripts for training each implemented RL algorithm.

# extracted from examples/run_dqn.py
import gymnasium as gym
from diverserl.algos import DQN
from diverserl.trainers import DeepRLTrainer
from diverserl.common.utils import make_envs

# create the training and evaluation environments
env, eval_env = make_envs(env_id='CartPole-v1')

# config is a dict of DQN hyperparameters; in examples/run_dqn.py it is built
# from the parsed command-line arguments
algo = DQN(env=env, **config)

trainer = DeepRLTrainer(
    algo=algo,
    env=env,
    eval_env=eval_env,
)

trainer.run()

Alternatively, the example scripts accept configuration parameters as command-line arguments:

python examples/run_dqn.py --env-id CartPole-v1

Or a YAML configuration file:

python examples/run_dqn.py --config-path configurations/dqn_classic_control.yaml
