
Robot control with differentiable simulation vs model-free RL

Introduction

In this project, we compare the performance of model-free reinforcement learning (RL) with model-based RL that exploits differentiable simulation. We use the PyBullet framework for simulation and PyTorch for the RL algorithms. Differentiable simulation is implemented with the Tiny Differentiable Simulator.
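To illustrate the core idea, below is a minimal, hypothetical sketch of what differentiable simulation buys you: the task loss can be backpropagated through the physics directly into the policy parameters, instead of relying on the high-variance return estimates of model-free RL. The toy sim_step stands in for a real differentiable physics step (e.g., one from the Tiny Differentiable Simulator) and is not the code used in this project:

```python
import torch

def sim_step(state, action):
    # Stand-in for a differentiable physics step; toy linear dynamics only.
    return state + 0.1 * action

policy = torch.nn.Linear(2, 2)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)
target = torch.tensor([1.0, 0.5])   # hypothetical goal state

for _ in range(100):
    s = torch.zeros(2)              # initial state
    for _ in range(10):             # unroll the simulator
        s = sim_step(s, policy(s))
    loss = torch.sum((s - target) ** 2)  # task loss on the final state
    optimizer.zero_grad()
    loss.backward()                 # gradients flow through the physics steps
    optimizer.step()
```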

Setup

Install dependencies

pip install -r requirements.txt

Get the necessary meshes and URDFs

Validate installation

To verify that the RL setup loads properly, run the main.py files in:

  1. the single_action directory (single-joint control task);
  2. the root directory (multi-joint control task).
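For example (assuming the scripts take no required command-line arguments):

python single_action/main.py
python main.py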

Run the simulation

Using the Makefile, you can run two control tasks in either training or evaluation mode.
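For example (the target names here are hypothetical; consult the Makefile for the actual ones):

make train-task1   # hypothetical target: train on Task 1
make eval-task2    # hypothetical target: evaluate on Task 2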

Task 1

Single-joint control of the robot arm to throw a ball as far as possible. This task is intended for debugging.

Task 2

Two-joint control to throw a ball so that it hits a brick on the table.

Experiments

By modifying the contents of agent_params.json and running the train script, you can evaluate how different hyperparameters affect the robot's performance. The RL algorithms used in this project are PPO and SAC; both proved extremely sensitive to the choice of hyperparameters.
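The exact schema of agent_params.json is defined by the repo's training code; the snippet below is only an illustrative sketch of the kind of hyperparameters such a file might hold (all keys and values are hypothetical):

```python
import json

# Hypothetical hyperparameters for a PPO/SAC agent; the real keys are
# whatever the repo's training code reads from agent_params.json.
agent_params = {
    "algo": "ppo",            # or "sac"
    "learning_rate": 3e-4,
    "gamma": 0.99,            # discount factor
    "batch_size": 64,
    "hidden_sizes": [64, 64],
}

# Write the config so the train script can pick it up.
with open("agent_params.json", "w") as f:
    json.dump(agent_params, f, indent=2)
```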

Acknowledgements
