
PFRL


PFRL is a deep reinforcement learning library that implements various state-of-the-art deep reinforcement learning algorithms in Python using PyTorch.

(Animated demos: Boxing, Humanoid, Grasping, Atlas, SlimeVolley)

Installation

PFRL is tested with Python 3.7.7. For other requirements, see requirements.txt.

PFRL can be installed via PyPI:

pip install pfrl

It can also be installed from the source code:

python setup.py install

Refer to Installation for more information on installation.

Getting started

You can try the PFRL Quickstart Guide first, or check the examples prepared for Atari 2600 and OpenAI Gym environments.

For more information, you can refer to PFRL's documentation.


Algorithms

| Algorithm | Discrete Action | Continuous Action | Recurrent Model | Batch Training | CPU Async Training | Pretrained models* |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|
| DQN (including DoubleDQN etc.) | ✓ | ✓ (NAF) | ✓ | ✓ | x | ✓ |
| Categorical DQN | ✓ | x | ✓ | ✓ | x | x |
| Rainbow | ✓ | x | ✓ | ✓ | x | ✓ |
| IQN | ✓ | x | ✓ | ✓ | x | ✓ |
| DDPG | x | ✓ | x | ✓ | x | ✓ |
| A3C | ✓ | ✓ | ✓ | ✓ (A2C) | ✓ | ✓ |
| ACER | ✓ | ✓ | ✓ | x | ✓ | x |
| PPO | ✓ | ✓ | ✓ | ✓ | x | ✓ |
| TRPO | ✓ | ✓ | ✓ | ✓ | x | ✓ |
| TD3 | x | ✓ | x | ✓ | x | ✓ |
| SAC | x | ✓ | x | ✓ | x | ✓ |

*Note on Pretrained models: PFRL provides pretrained models (sometimes called a 'model zoo') for our reproducibility scripts on Atari environments (DQN, IQN, Rainbow, and A3C) and Mujoco environments (DDPG, TRPO, PPO, TD3, SAC), for each benchmarked environment.

The following algorithms have been implemented in PFRL.

The following useful techniques have also been implemented in PFRL.

Environments

Environments that support a subset of the OpenAI Gym interface (the reset and step methods) can be used.
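Because only reset and step are required, a plain Python object implementing that protocol suffices; it does not need to inherit from gym.Env. A minimal sketch, assuming a toy CountdownEnv and an interaction loop that are illustrative only and not part of PFRL:

```python
class CountdownEnv:
    """Toy environment implementing the Gym-style subset described above:
    reset() -> obs, and step(action) -> (obs, reward, done, info)."""

    def __init__(self, start=5):
        self.start = start
        self.state = start

    def reset(self):
        self.state = self.start
        return self.state

    def step(self, action):
        # Action 1 decrements the counter; the episode ends at zero.
        if action == 1:
            self.state -= 1
        done = self.state == 0
        reward = 1.0 if done else 0.0
        return self.state, reward, done, {}


def run_episode(env, act, max_steps=100):
    """Generic interaction loop: any policy callable fits the act slot."""
    obs = env.reset()
    total = 0.0
    for _ in range(max_steps):
        obs, reward, done, _ = env.step(act(obs))
        total += reward
        if done:
            break
    return total


total = run_episode(CountdownEnv(), act=lambda obs: 1)
print(total)  # 1.0: the always-decrement policy finishes the countdown
```

The same loop shape works with any environment exposing reset and step, which is all the library's training utilities rely on.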

Contributing

Any kind of contribution to PFRL would be highly appreciated! If you are interested in contributing to PFRL, please read CONTRIBUTING.md.

License

MIT License.

Citations

To cite PFRL in publications, please cite our paper on ChainerRL, the library on which PFRL is based:

@article{JMLR:v22:20-376,
  author  = {Yasuhiro Fujita and Prabhat Nagarajan and Toshiki Kataoka and Takahiro Ishikawa},
  title   = {ChainerRL: A Deep Reinforcement Learning Library},
  journal = {Journal of Machine Learning Research},
  year    = {2021},
  volume  = {22},
  number  = {77},
  pages   = {1-14},
  url     = {http://jmlr.org/papers/v22/20-376.html}
}