This repo contains the code to reproduce all results of the paper *Learning Graph Algorithms with Recurrent Graph Neural Networks*.
To cite our work:
```bibtex
@article{AlgorithmicRecurrentGNN2022,
  title     = {Learning Graph Algorithms With Recurrent Graph Neural Networks},
  author    = {Grötschla, Florian and Mathys, Joël and Wattenhofer, Roger},
  url       = {https://arxiv.org/abs/2212.04934},
  publisher = {arXiv},
  year      = {2022}
}
```
The necessary conda environment can be set up as follows:
```bash
conda env create -f environment.yml
conda activate RecGNN
```
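As a quick check that the environment is usable (a minimal sketch; this assumes PyTorch is among the dependencies in `environment.yml`, which is not spelled out here):

```bash
# Assumption: the RecGNN environment ships PyTorch; adjust if the import fails.
python -c "import torch; print(torch.__version__)"
```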
To run all the configurations used in the paper, we first create configuration files for every run:
```bash
python create_configs.py
```
This will create three different directories with JSON configs for the RecGNN, IterGNN, and GIN baseline runs.
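For reference, the generated directories match the ones used by the run commands below (the individual config filenames depend on `create_configs.py`):

```bash
ls -d configs_*/
# configs_gin/  configs_itergnn/  configs_recGNN/
```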
Once the configs are generated, they can be passed to the `run_experiment.py` script, which will run them. Model checkpoints will be stored in `models/`, and the stdout of the script can be redirected to a separate file for every run. This makes it easy to collect the results later on.
First, create the `models/` directory and directories for the output of every model:

```bash
mkdir models
mkdir runs_recGNN
mkdir runs_itergnn
mkdir runs_gin
```
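Before launching the full sweeps below, a single configuration can be run on its own as a sanity check; the config filename here is only a placeholder for any file generated by `create_configs.py`:

```bash
# <config>.json is a placeholder; pick any generated RecGNN config file.
python run_experiment.py --config configs_recGNN/<config>.json > runs_recGNN/output_test.out
```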
Afterwards, all experiments for RecGNN can be run with:
```bash
CONFIG_ID=0
for FILE in configs_recGNN/*
do
  echo "Run RecGNN config $CONFIG_ID"
  python run_experiment.py --config "$FILE" > "runs_recGNN/output_$CONFIG_ID.out"
  (( CONFIG_ID++ ))
done
```
For IterGNN, you can do:
```bash
CONFIG_ID=0
for FILE in configs_itergnn/*
do
  echo "Run IterGNN config $CONFIG_ID"
  python run_experiment.py --config "$FILE" > "runs_itergnn/output_$CONFIG_ID.out"
  (( CONFIG_ID++ ))
done
```
And for GIN:
```bash
CONFIG_ID=0
for FILE in configs_gin/*
do
  echo "Run GIN config $CONFIG_ID"
  python run_experiment.py --config "$FILE" > "runs_gin/output_$CONFIG_ID.out"
  (( CONFIG_ID++ ))
done
```
Now that the run directories are populated with the output from the script, we can collect the results as follows:

```bash
python collect_results.py runs_recGNN output_recGNN      # For RecGNN
python collect_results.py runs_itergnn output_itergnn    # For IterGNN
python collect_results.py runs_gin output_gin            # For GIN
```
The `collect_results.py` script scans all files in the provided directory and creates summary CSVs. After running the code above, we get `output_recGNN.csv`, `output_itergnn.csv`, and `output_gin.csv` with information for every run and training epoch.
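For a quick look at one of the summary files (assuming the CSVs were written to the current working directory; the exact columns are determined by `collect_results.py`):

```bash
head -n 3 output_recGNN.csv
```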
To run the extrapolation tests (this requires the output files and model checkpoints from the training runs above):

```bash
# RecGNN
python run_extrapolation.py output_recGNN.csv gin-mlp
python run_extrapolation.py output_recGNN.csv gru-mlp
# IterGNN
python run_extrapolation_itergnn.py
# GIN
python run_extrapolation_gin.py
```
These scripts will run the extrapolation on bigger graphs and write the results to `output_extrapolation_gin-mlp.csv`, `output_extrapolation_gru-mlp.csv`, `output_extrapolation_itergnn.csv`, and `output_extrapolation_gin.csv`, respectively.
For the stabilization runs, we have the following scripts:
```bash
python run_stabilization.py gin-mlp
python run_stabilization.py gru-mlp
```
Results will be written to `output_stabilization_gin-mlp.csv` and `output_stabilization_gru-mlp.csv`.
After the above runs have finished successfully, the figures presented in the paper can be generated by running the figures notebook.
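For example, assuming Jupyter is installed in the `RecGNN` environment, launch it from the repository root and open the figures notebook there:

```bash
# Start the notebook server; the figures notebook can then be opened in the browser.
jupyter notebook
```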