
An interpretable framework for inferring nonlinear multivariate Granger causality based on self-explaining neural networks.


i6092467/GVAR


Interpretable Models for Granger Causality Using Self-explaining Neural Networks

Introduction

Exploratory analysis of time series data can yield a better understanding of complex dynamical systems. Granger causality is a practical framework for analysing interactions in sequential data, applied in a wide range of domains. We propose a novel framework for inferring multivariate Granger causality under nonlinear dynamics based on an extension of self-explaining neural networks. This framework is more interpretable than other neural-network-based techniques for inferring Granger causality, since in addition to relational inference, it also allows detecting signs of Granger-causal effects and inspecting their variability over time. In comprehensive experiments on simulated data, we show that our framework performs on par with several powerful baseline methods at inferring Granger causality and that it achieves better performance at inferring interaction signs. The results suggest that our framework is a viable and more interpretable alternative to sparse-input neural networks for inferring Granger causality.
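The underlying notion can be illustrated with the classical linear recipe: a series x Granger-causes y if including x's past improves the prediction of y beyond what y's own past achieves. Below is a minimal NumPy sketch of this idea; the simulated data and variable names are purely illustrative and not part of this repository:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + rng.normal(scale=0.1)
    # x Granger-causes y through the 0.8 * x[t-1] term
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.normal(scale=0.1)

def rss(target, predictors):
    """Residual sum of squares of a least-squares regression."""
    coef, *_ = np.linalg.lstsq(predictors, target, rcond=None)
    resid = target - predictors @ coef
    return float(resid @ resid)

# Restricted model: predict y_t from y's own past only.
Z_r = np.column_stack([np.ones(T - 1), y[:-1]])
# Full model: predict y_t from the past of both y and x.
Z_f = np.column_stack([np.ones(T - 1), y[:-1], x[:-1]])
rss_r = rss(y[1:], Z_r)
rss_f = rss(y[1:], Z_f)
# A substantially lower RSS for the full model is evidence
# that x Granger-causes y.
```

GVAR generalises this test beyond linear dynamics while retaining interpretable, sign-aware coefficients.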

Relational inference in time series:

*(figure: relational inference)*

In addition to structure, our approach allows inferring Granger-causal effect signs:

*(figure: interpretable relational inference)*

The overall summary of the proposed framework:

*(figure: inference framework summary)*

This project implements an autoregressive model for inferring Granger causality based on self-explaining neural networks – generalised vector autoregression (GVAR). The description of the model, inference framework, experiments, comparison to baselines, and ablations can be found in the ICLR 2021 paper. A short explanation of the method is provided in this talk. The poster is available here.
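The core idea behind generalised vector autoregression can be sketched as a VAR model whose coefficient matrices are produced by a neural network as a function of the recent past. The NumPy forward pass below is an illustrative sketch only: the weight shapes and function names are hypothetical and do not match this repository's API.

```python
import numpy as np

rng = np.random.default_rng(1)
p, K = 3, 2  # number of variables, lag order

# Illustrative (untrained) MLP weights: map the flattened lagged window
# (p * K values) to K coefficient matrices of shape (p, p).
W1 = rng.normal(scale=0.1, size=(p * K, 16))
W2 = rng.normal(scale=0.1, size=(16, K * p * p))

def gvar_step(window):
    """One-step prediction with coefficients that depend on the recent past.

    window[k] holds the observation at lag k + 1.
    """
    h = np.tanh(window.reshape(-1) @ W1)   # hidden layer
    A = (h @ W2).reshape(K, p, p)          # time-varying coefficient matrices
    # Prediction: x_hat_t = sum_k A_k(window) @ x_{t-k}
    return sum(A[k] @ window[k] for k in range(K)), A

window = rng.normal(size=(K, p))
x_hat, A = gvar_step(window)
```

Because the prediction stays linear in the lagged values, the entries of the A matrices act as interpretable coefficients: aggregating their magnitudes over time gives Granger-causal strength estimates, and their signs indicate the direction of the effect.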

Requirements

All required libraries are listed in the conda environment file environment.yml. To install and activate the environment:

conda env create -f environment.yml   # install dependencies
conda activate SENGC                  # activate environment

Note that the current implementation of GVAR requires a GPU with CUDA 10.1.0 support.

Experiments

The /bin folder contains shell scripts for the three simulation experiments described in the paper (all arguments are set within the scripts):

  • Lorenz 96: run_grid_search_lorenz96
  • fMRI: run_grid_search_fMRI
  • Lotka–Volterra: run_grid_search_lotka_volterra

The data used to generate results in the paper are stored in the folder datasets/experiment_data.

Further details are documented within the code.

Acknowledgements

Code for the baseline models, apart from VAR, is not included in this project and is available in the following repositories:

Authors

References

Below are some references helpful for understanding our method:

  • C. W. J. Granger. Investigating causal relations by econometric models and cross-spectral methods. Econometrica, 37(3):424–438, 1969.
  • A. Arnold, Y. Liu, and N. Abe. Temporal causal modeling with graphical Granger methods. In Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’07, pp. 66–75, 2007.
  • L. Song, M. Kolar, and E. Xing. Time-varying dynamic Bayesian networks. In Advances in Neural Information Processing Systems 22, pp. 1732–1740. Curran Associates, Inc., 2009.
  • A. Tank, I. Covert, N. Foti, A. Shojaie, and E. Fox. Neural Granger causality for nonlinear time series, 2018. arXiv:1802.05842.
  • D. Alvarez-Melis and T. Jaakkola. Towards robust interpretability with self-explaining neural networks. In Advances in Neural Information Processing Systems 31, pp. 7775–7784. Curran Associates, Inc., 2018.

Citation

@inproceedings{Marcinkevics2021,
  title={Interpretable Models for Granger Causality Using Self-explaining Neural Networks},
  author={Ri{\v{c}}ards Marcinkevi{\v{c}}s and Julia E Vogt},
  booktitle={International Conference on Learning Representations},
  year={2021},
}
