OmniSafe is an infrastructural framework for accelerating SafeRL research.
Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback
NeurIPS 2023: Safety-Gymnasium: A Unified Safe Reinforcement Learning Benchmark
NeurIPS 2023: Safe Policy Optimization: A benchmark repository for safe reinforcement learning algorithms
Multi-Agent Constrained Policy Optimisation (MACPO; MAPPO-L).
Source code for the paper "Optimal Energy System Scheduling Combining Mixed-Integer Programming and Deep Reinforcement Learning". Topics: safe reinforcement learning, energy management.
Open-source reinforcement learning environment for autonomous racing — featured as a conference paper at ICCV 2021 and as the official challenge tracks at both SL4AD@ICML2022 and AI4AD@IJCAI2022. These are the L2R core libraries.
The Verifiably Safe Reinforcement Learning Framework
LAMBDA is a model-based reinforcement learning agent that uses Bayesian world models for safe policy optimization.
Implementation of PPO Lagrangian in PyTorch
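The core idea behind PPO-Lagrangian can be sketched in a few lines: a Lagrange multiplier scales a cost penalty on the PPO objective, and the multiplier itself is raised by dual gradient ascent whenever measured episodic cost exceeds the budget. A minimal illustrative sketch (function names and the learning rate are assumptions for illustration, not the linked repo's actual API):

```python
# Minimal sketch of the Lagrange-multiplier update at the heart of
# PPO-Lagrangian (illustrative; names are assumptions, not the repo's API).
# The policy ascends reward - lambda * cost, while lambda is increased
# whenever the measured episodic cost exceeds the budget d.

def update_lagrange_multiplier(lam, episode_cost, cost_limit, lr=0.05):
    """One dual gradient ascent step on lambda, projected to stay >= 0."""
    lam += lr * (episode_cost - cost_limit)
    return max(lam, 0.0)

def penalized_objective(reward, cost, lam):
    """Scalarised objective the policy gradient ascends (normalised form)."""
    return (reward - lam * cost) / (1.0 + lam)

# Example: episodic cost overshoots the limit of 25, so lambda grows.
lam = 0.0
for episode_cost in [30.0, 28.0, 26.0]:
    lam = update_lagrange_multiplier(lam, episode_cost, cost_limit=25.0)
```

In practice the multiplier update is interleaved with the usual PPO clipped-surrogate policy updates; the projection to non-negative values keeps the penalty from rewarding cost once the constraint is satisfied.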
Code for "Constrained Variational Policy Optimization for Safe Reinforcement Learning" (ICML 2022)
Implementations of SAILR, PDO, and CSC
Safe Pontryagin Differentiable Programming (Safe PDP) is a theoretical and algorithmic framework for safe differentiable programming that solves a broad class of safety-critical learning and control tasks.
Safe Multi-Agent Isaac Gym benchmark for safe multi-agent reinforcement learning research.
Reading list for adversarial perspective and robustness in deep reinforcement learning.
Repository containing the code for the paper "Safe Model-Based Reinforcement Learning using Robust Control Barrier Functions": an implementation of SAC combined with Robust Control Barrier Functions (RCBFs) for safe reinforcement learning in two custom environments.
Training (hopefully) safe agents in gridworlds
Reinforcement Learning Course Project - IIT Bombay Fall 2018
Code for the paper Stability-guaranteed reinforcement learning for contact-rich manipulation, IEEE RA-L, 2020.
[Humanoids 2022] Learning Collision-free and Torque-limited Robot Trajectories based on Alternative Safe Behaviors