Code to reproduce experiments from "User-Interactive Offline Reinforcement Learning" (ICLR 2023)
Implementation of CORL for Fetch and Unitree A1 tasks
Code for the undergraduate final-year project "Offline Risk-Averse Actor-Critic with Curriculum Learning"
Code for "Efficient Offline Policy Optimization with a Learned Model" (ICLR 2023)
Offline to Online RL: AWAC & IQL PyTorch Implementation
A framework for offline reinforcement learning, with implementations of SCQL and SCQL+D
Official code for paper: Conservative objective models are a special kind of contrastive divergence-based energy model
Need 4 Speed, FYP 2023-24 @ Monash.
Code accompanying the paper "On the Role of Discount Factor in Offline Reinforcement Learning" (ICML 2022)
Package for recording Transitions in OpenAI Gym Environments.
Author's repository for GSM8K-AI-SubQ reasoning dataset
Clean single-file implementation of offline RL algorithms in JAX
🧠 Learning World Value Functions without Exploration
Summarising the research of Offline RL in Federated Setting.
Direct port of TD3_BC to JAX using Haiku and optax.
Code for NeurIPS 2023 paper Accountability in Offline Reinforcement Learning: Explaining Decisions with a Corpus of Examples
PyTorch Implementation of Offline Reinforcement Learning algorithms
D2C (Data-driven Control Library) is a library for data-driven control based on reinforcement learning.
The Medkit-Learn(ing) Environment: Medical Decision Modelling through Simulation (NeurIPS 2021) by Alex J. Chan, Ioana Bica, Alihan Huyuk, Daniel Jarrett, and Mihaela van der Schaar.
Code for Continuous Doubly Constrained Batch Reinforcement Learning, NeurIPS 2021.