Reinforcement Learning Course Project - IIT Bombay Fall 2018 (updated Nov 25, 2018; Python)
Training (hopefully) safe agents in gridworlds
The Verifiably Safe Reinforcement Learning Framework
Implementations of SAILR, PDO, and CSC
Safe Policy Optimization with Local Features
OpenAI Gym environment emphasizing partial observability, set in a dynamic, fast, cross-platform, physics-based 2D world
Code for the paper "Stability-Guaranteed Reinforcement Learning for Contact-Rich Manipulation", IEEE RA-L, 2020.
Code for the paper "Learning Deep Energy Shaping Policies for Stability-Guaranteed Manipulation", IEEE RA-L, 2021.
Repository containing the code for the paper "Safe Model-Based Reinforcement Learning using Robust Control Barrier Functions". Specifically, an implementation of SAC + Robust Control Barrier Functions (RCBFs) for safe reinforcement learning in two custom environments
Poster about Curriculum Induction for Safe Reinforcement Learning
Safe Pontryagin Differentiable Programming (Safe PDP) is a new theoretical and algorithmic safe differentiable framework to solve a broad class of safety-critical learning and control tasks.
Code for the paper "Learning Stable Normalizing-Flow Control for Robotic Manipulation", IEEE ICRA, 2021.
Implementation of PPO Lagrangian in PyTorch
Safe Multi-Agent Robosuite benchmark for safe multi-agent reinforcement learning research.
[Humanoids 2022] Learning Collision-free and Torque-limited Robot Trajectories based on Alternative Safe Behaviors
LAMBDA is a model-based reinforcement learning agent that uses Bayesian world models for safe policy optimization
[IROS '22] Model-free Neural Lyapunov Control
Safe Multi-Agent Isaac Gym benchmark for safe multi-agent reinforcement learning research.
Towards Safe Reinforcement Learning via Constraining Conditional Value at Risk (IJCAI 2022)
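Several of the projects above (PPO Lagrangian, PDO, the CVaR-constrained work) share one core mechanism: Lagrangian relaxation of the safety constraint, turning "maximize reward subject to expected cost below a limit d" into an unconstrained objective with a multiplier updated by dual ascent. A minimal sketch of that multiplier update, with illustrative names that do not come from any listed repository:

```python
def dual_ascent(costs, limit, lam=0.0, lr=0.1):
    """Sketch of a Lagrange-multiplier update for constrained RL.

    `costs` is a sequence of per-iteration expected-cost estimates Jc(pi),
    `limit` is the cost budget d. All names here are hypothetical, not any
    listed repo's API.
    """
    history = []
    for jc in costs:
        # Dual ascent: raise lambda while the cost estimate exceeds the
        # budget (strengthening the penalty on the policy objective
        # r - lambda * c), and let it decay toward 0 once feasible.
        lam = max(0.0, lam + lr * (jc - limit))
        history.append(lam)
    return history

# While costs sit above the limit, lambda grows; once the policy becomes
# feasible, lambda shrinks back toward zero.
lams = dual_ascent(costs=[2.0, 2.0, 0.5, 0.5, 0.5], limit=1.0)
```

In the actual algorithms this update is interleaved with a policy-gradient step (e.g. PPO) on the penalized objective; the sketch isolates only the multiplier dynamics.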