Adding MPO and DMPO #392

Open · wants to merge 27 commits into master
Conversation

@Jogima-cyber commented May 23, 2023

Description

I've started investigating the MPO algorithm family, and I want to do it in the CleanRL fashion (benchmarking everything and providing a single-file implementation for each algorithm), since that seems like a good way to make these algorithms accessible to everyone.

Apart from DeepMind's official implementation, which is hard to use or even to analyze because it is built in a modular fashion and relies heavily on in-house libraries that are used almost exclusively by DeepMind, there are very few resources on these algorithms, and none I would call trustworthy.

Moreover, this family lacks a real benchmark: the related papers only benchmark a subset of the family's variants (TD(5) and Retrace, whereas we would also like to see results for a distributional critic), and even that benchmarking is sparse (for the Gym MuJoCo environments it does not cover all of them, was run on the v1 versions, and compares against no algorithm other than a version of SAC).

Nonetheless, I think this family should be thoroughly investigated because of the following claims in the robotics continuous-control domain:

  • Much better sample efficiency than PPO, while being as insensitive to hyperparameter tuning as PPO (the latter claim matters a lot: practitioners in robotics usually cannot make DDPG/TD3/SAC work on real robots because those algorithms require extensive hyperparameter tuning).
  • The same sample efficiency as SAC but better asymptotic performance.

Furthermore, DeepMind's repeated use of this family of algorithms in robotics over the past five years, most recently in the quite impressive https://arxiv.org/pdf/2304.13653.pdf paper, may be a signal that it is a very good family of algorithms for robotic continuous control.
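
For readers less familiar with the algorithm, here is a minimal sketch of MPO's core policy-improvement step (non-parametric E-step weights, temperature dual, and weighted maximum-likelihood M-step), based on the original MPO paper rather than on the code in this PR; all names and shapes are illustrative assumptions, and the M-step KL trust region is omitted:

```python
import math
import torch
import torch.nn.functional as F

# Illustrative sketch, not the implementation in this PR.
# q_values:          (num_sampled_actions, batch) Q-estimates for actions sampled
#                    from the target policy at the sampled states.
# online_log_probs:  log pi_theta(a_j | s) for those same actions, same shape.
# eta:               learnable temperature (positive scalar tensor); eps_dual: KL bound.
def mpo_policy_loss(q_values, online_log_probs, eta, eps_dual):
    n_actions = q_values.shape[0]

    # E-step: non-parametric target weights q(a|s) ∝ exp(Q(s, a) / eta).
    weights = F.softmax(q_values / eta, dim=0).detach()

    # Temperature dual: minimizing this over eta enforces KL(q || pi_old) <= eps_dual.
    dual_loss = eta * eps_dual + eta * (
        torch.logsumexp(q_values / eta, dim=0) - math.log(n_actions)
    ).mean()

    # M-step: weighted maximum likelihood of the sampled actions under the online policy
    # (the additional KL trust-region penalty on the policy parameters is omitted here).
    policy_loss = -(weights * online_log_probs).sum(dim=0).mean()

    return policy_loss + dual_loss
```

DMPO, as used in Acme, keeps this same E-step/M-step policy update but replaces the scalar critic with a distributional (categorical, C51-style) one.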

Types of changes

  • Bug fix
  • New feature
  • New algorithm
  • Documentation

Checklist:

  • I've read the CONTRIBUTION guide (required).
  • I have ensured pre-commit run --all-files passes (required).
  • I have updated the tests accordingly (if applicable).
  • I have updated the documentation and previewed the changes via mkdocs serve.
    • I have explained note-worthy implementation details.
    • I have explained the logged metrics.
    • I have added links to the original paper and related papers.

If you need to run benchmark experiments for a performance-impacting change:

  • I have contacted @vwxyzjn to obtain access to the openrlbenchmark W&B team.
  • I have used the benchmark utility to submit the tracked experiments to the openrlbenchmark/cleanrl W&B project, optionally with --capture-video.
  • I have performed RLops with python -m openrlbenchmark.rlops.
    • For new feature or bug fix:
      • I have used the RLops utility to understand the performance impact of the changes and confirmed there is no regression.
    • For new algorithm:
      • I have created a table comparing my results against those from reputable sources (i.e., the original paper or other reference implementation).
    • I have added the learning curves generated by the python -m openrlbenchmark.rlops utility to the documentation.
    • I have added links to the tracked experiments in W&B, generated by python -m openrlbenchmark.rlops ....your_args... --report, to the documentation.

…and distributional critic. Adding partial doc for the first algorithm.

@Jogima-cyber (Author)

Who should run the benchmarks, and how? We'd like to check whether we get the same results as the Acme benchmarks:

(figure: DeepMind's official benchmark curves)

However, their evaluation is done differently from CleanRL's, and training mostly runs for 1e7 steps (CleanRL benchmarks usually use 1e6, if I'm not mistaken).

…: critic targets are averaged across several actions sampled from the target policy. Should be added to dmpo too; TBD.
… 2. q values for policy improvement are taken from the target qf.
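
For context on the two commits above, here is a minimal sketch of what "averaging critic targets across several actions sampled from the target policy" could look like; the names (`target_actor`, `target_qf`, `num_target_actions`, `gamma`) are illustrative, not the actual variables in this PR:

```python
import torch

# Illustrative sketch, assuming a PyTorch setup with separate target networks.
with torch.no_grad():
    # Sample several next actions per next state from the *target* policy.
    next_dist = target_actor(next_obs)                      # a torch.distributions object
    next_actions = next_dist.sample((num_target_actions,))  # (N, batch, action_dim)

    # Evaluate the target critic on every sampled action and average over the samples.
    next_obs_rep = next_obs.unsqueeze(0).expand(num_target_actions, *next_obs.shape)
    next_q = target_qf(next_obs_rep, next_actions)          # (N, batch, 1)
    next_q_mean = next_q.mean(dim=0)                        # (batch, 1)

    td_target = rewards + gamma * (1.0 - dones) * next_q_mean

# Per the second commit, the Q-values that weight the policy improvement (E-step)
# would likewise come from target_qf rather than from the online critic.
```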
@vwxyzjn (Owner) commented May 24, 2023

If it’s possible, you should run the benchmark. Regarding experiment settings, perfect replication is difficult (e.g., do we know their PPO settings?). It’s up to you if you want to use 1e6 or 1e7.

@Jogima-cyber (Author)

Okay, I'm gonna run the benchmark!

@Jogima-cyber (Author)

Results obtained with the proposed DMPO.
(figure: learning curves of the proposed DMPO)
