MuZero General

A flexible, commented and documented implementation of MuZero based on the Google DeepMind paper and the associated pseudocode. It is designed to be easily adaptable to any game or reinforcement learning environment (such as Gym). You only need to edit the game file with the parameters and the game class. Please refer to the documentation and the example.
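For illustration, a game file for a Gym environment might look roughly like the sketch below. The class and method names (MuZeroConfig, Game, step, legal_actions, reset) and the hyperparameters are assumptions for this example, not the repository's exact interface.

# Hypothetical sketch of a game file, e.g. games/mygame.py (names are illustrative).
import gym


class MuZeroConfig:
    def __init__(self):
        # A few of the hyperparameters you would tune per game.
        self.max_moves = 500        # maximum number of moves per self-play game
        self.num_simulations = 50   # MCTS simulations per move
        self.discount = 0.997       # discount factor for value targets
        self.lr_init = 0.02         # initial learning rate


class Game:
    # Thin wrapper exposing the environment to MuZero
    # (assumes the classic Gym API: step returns observation, reward, done, info).

    def __init__(self, seed=None):
        self.env = gym.make("CartPole-v1")
        if seed is not None:
            self.env.seed(seed)

    def step(self, action):
        observation, reward, done, _ = self.env.step(action)
        return observation, reward, done

    def legal_actions(self):
        return list(range(self.env.action_space.n))

    def reset(self):
        return self.env.reset()

    def close(self):
        self.env.close()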

MuZero is a model-based reinforcement learning algorithm and a successor to AlphaZero. It learns to master games without knowing the rules: it is given only the available actions and then learns to play and master the game. It is more efficient than similar algorithms such as AlphaZero, SimPLe and World Models.

It uses PyTorch and Ray to run the different components simultaneously, with full GPU support.

There are four components, implemented as classes that each run simultaneously in a dedicated thread: the shared storage holds the latest neural network weights; the self-play workers use those weights to generate games and store them in the replay buffer; the trainer then samples those games to train the network and stores the updated weights back in the shared storage, closing the cycle. See How it works.

These components are launched and managed by the MuZero class in muzero.py, and the structure of the neural network is defined in models.py.
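Conceptually, the interaction between these components can be sketched with Ray actors as below. The class and method names are invented for this illustration and do not mirror the repository's code.

# Conceptual sketch of the self-play / training cycle (illustrative names only).
import random
import ray

ray.init()


@ray.remote
class SharedStorage:
    # Holds the latest network weights.
    def __init__(self, weights):
        self.weights = weights

    def get_weights(self):
        return self.weights

    def set_weights(self, weights):
        self.weights = weights


@ray.remote
class ReplayBuffer:
    # Stores finished self-play games for the trainer to sample.
    def __init__(self):
        self.games = []

    def save_game(self, game_history):
        self.games.append(game_history)

    def sample_games(self, n):
        return random.sample(self.games, min(n, len(self.games)))


storage = SharedStorage.remote(weights={})   # placeholder initial weights
replay_buffer = ReplayBuffer.remote()

# Self-play workers would pull weights with storage.get_weights.remote(), play games
# and push them with replay_buffer.save_game.remote(...); the trainer would call
# replay_buffer.sample_games.remote(...) to update the network, then push the new
# weights back with storage.set_weights.remote(...), closing the cycle.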

Training performance is tracked and displayed in real time in TensorBoard.

[Lunar Lander training preview]

Games already implemented, with pretrained networks available

  • Lunar Lander
  • Cartpole


Getting started

Installation

cd muzero-general
pip install -r requirements.txt

Training

Edit the end of muzero.py:

muzero = MuZero("cartpole")
muzero.train()

Then run:

python muzero.py

To visualize the training results, run in a new terminal:

tensorboard --logdir ./

Testing

Edit the end of muzero.py:

muzero = MuZero("cartpole")
muzero.load_model()
muzero.test()

Then run:

python muzero.py
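
Training and testing can also be chained in a single run; here is a minimal sketch using only the calls shown above:

muzero = MuZero("cartpole")
muzero.train()
muzero.load_model()
muzero.test()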

Coming soon

  • Atari mode with residual network
  • Live test policy & value tracking
  • OpenSpiel integration
  • Checkers game
  • TensorFlow mode

Authors

  • Werner Duvaud
  • Aurèle Hainaut
  • Paul Lenoir