This is my attempt to use reinforcement learning to solve a Rubik's Cube without the aid of human knowledge.
I'm working on this project as a part of CPSC 674: Advanced Computational Intelligence for Games.
Many of the ideas I will draw upon and try to recreate come from these two papers:
One method this project uses is Autodidactic Iteration, a technique introduced by Stephen McAleer.
First, a joint policy and value network is trained via reinforcement learning. Training examples are generated by randomly scrambling Rubik's Cubes, and their labels are created using the network being trained. Periodically, new training samples are generated and labelled with the updated parameters. This process is repeated several times.
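To make the labelling step concrete, here is a minimal sketch of how Autodidactic Iteration constructs targets, using a toy puzzle (an integer line where 0 is the solved state and the moves are +/-1) instead of a cube. The function and state names are illustrative, not the project's actual code; the reward scheme (+1 for reaching the solved state, -1 otherwise) follows McAleer et al.

```python
# Toy stand-in for a cube: state is an integer, 0 is solved, moves are +/-1.
MOVES = (-1, +1)

def is_solved(state):
    return state == 0

def reward(state):
    # +1 for reaching the solved state, -1 for any other state.
    return 1.0 if is_solved(state) else -1.0

def adi_targets(state, value_fn):
    """Build the (value, policy) labels for one scrambled state using
    the current, partially trained value estimator value_fn."""
    child_values = []
    for move in MOVES:
        child = state + move
        child_values.append(reward(child) + value_fn(child))
    best = max(range(len(MOVES)), key=lambda i: child_values[i])
    # Value label: best one-step lookahead; policy label: index of that move.
    return child_values[best], best

# With a crude stand-in estimate -|x|, the targets prefer the move
# that steps toward the solved state.
value_target, policy_target = adi_targets(3, lambda s: -abs(s))
```

In the real project these targets would be fed back into the network's training loop, so the labels improve as the network does.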
Once the policy and value network is trained, it is used to guide a Monte-Carlo Tree Search from the scrambled state to the solved state.
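The guidance works through the action-selection rule at each tree node: the network's policy output acts as a prior that is balanced against the best value observed so far, with a bonus for less-visited actions. The sketch below shows a PUCT-style rule of that shape; the exact constants and bookkeeping (virtual loss, backup of values) in the paper are simplified away, and the names are illustrative.

```python
import math

def select_action(priors, visit_counts, max_values, c=1.5):
    """Pick the action maximizing W + U, where W is the best value seen
    for that action and U is an exploration bonus scaled by the network's
    policy prior and shrunk as the action is visited more."""
    total = sum(visit_counts)
    scores = []
    for p, n, w in zip(priors, visit_counts, max_values):
        u = c * p * math.sqrt(total) / (1 + n)
        scores.append(w + u)
    return max(range(len(priors)), key=lambda i: scores[i])
```

For example, an action with a smaller prior but zero visits can outscore a heavily visited one, which is what drives the search to explore beyond the network's first choice.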
This project also implements Deep Approximate Value Iteration, a method introduced by Forest Agostinelli.
First, a Cost-to-Go function is trained via reinforcement learning. Afterwards, it is used as a heuristic for A* Search to try to solve the cube.
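The search half of that method can be sketched as ordinary A*, where the learned cost-to-go network plays the role of the heuristic h in f = g + h. Below is a minimal, self-contained version on the same toy integer-line puzzle (0 is solved, moves are +/-1), with a plain function standing in for the network; the real project uses a batched variant on cube states.

```python
import heapq

def a_star(start, heuristic, neighbors, is_goal):
    """Best-first search ordered by f = g + h. In the project, h would be
    the trained cost-to-go network's estimate; here it is a plain function."""
    frontier = [(heuristic(start), 0, start, [])]
    seen = set()
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path
        if state in seen:
            continue
        seen.add(state)
        for move, nxt in neighbors(state):
            if nxt not in seen:
                heapq.heappush(
                    frontier,
                    (g + 1 + heuristic(nxt), g + 1, nxt, path + [move]),
                )
    return None  # no solution found

# Toy domain: integer line, solved at 0, moves of +/-1, h(s) = |s|.
path = a_star(
    3,
    heuristic=abs,
    neighbors=lambda s: [(-1, s - 1), (+1, s + 1)],
    is_goal=lambda s: s == 0,
)
```

With an accurate heuristic the search expands almost only the states on the solution path, which is why a well-trained cost-to-go function makes the cube solvable quickly.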
You need Python 3 and PyTorch to run this project. It is also recommended that you use a CUDA-enabled machine.
To see a list of options for this project, clone this repo and run
python3 driver.py --help
To type in scrambles for a pre-trained model to attempt to solve, run something like
python3 driver.py --load models/model_1400.lckpt --solve
The above command will run the autodidactic iteration-based method. To try an approximate value iteration model, run something like
python3 driver.py -avi --load models/model_avi_82.lckpt --solve
Note that, as of this writing, the model is still being trained. The time it takes to find solutions will vary with the machine it runs on, and it cannot yet handle scrambles of arbitrary length. On my machine (with a GTX 1070 Ti) it solves most 12-move scrambles in under half a second.