Introduction

This repository contains code related to the ICML 2019 paper, Efficient Amortised Bayesian Inference for Hierarchical and Nonlinear Dynamical Systems.

VI-HDS is a flexible, scalable Bayesian inference framework for nonlinear dynamical systems characterised by distinct and hierarchical variability at the individual, group, and population levels. We cast parameter inference as stochastic optimisation of an end-to-end differentiable, block-conditional variational autoencoder. We specify the dynamics of the data-generating process as an ordinary differential equation (ODE) such that both the ODE and its solver are fully differentiable. This model class is highly flexible: the ODE right-hand sides can be a mixture of user-prescribed or “white-box” sub-components and neural network or “black-box” sub-components. Using stochastic optimisation, our amortised inference algorithm can seamlessly scale up to massive data collection pipelines (common in labs with robotic automation).
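To make the hybrid “white-box plus black-box” idea concrete, here is a minimal sketch in plain NumPy (hypothetical names, not this repository's API) of an ODE right-hand side that mixes a prescribed mechanistic term with a small neural-network term, integrated by a simple Euler scheme. Under an autodiff framework, every operation below is differentiable, so gradients flow through the solver to both the mechanistic parameters and the network weights.

    import numpy as np

    def white_box(x, theta):
        # Prescribed mechanism, e.g. linear decay with rate theta.
        return -theta * x

    def black_box(x, W1, b1, W2, b2):
        # Small MLP capturing unmodelled dynamics.
        h = np.tanh(x @ W1 + b1)
        return h @ W2 + b2

    def euler_solve(x0, theta, nn_params, dt=0.1, steps=100):
        # Simple explicit Euler integration; each step is elementary arithmetic,
        # so the whole trajectory is differentiable w.r.t. theta and nn_params.
        x = x0
        trajectory = [x]
        for _ in range(steps):
            dx = white_box(x, theta) + black_box(x, *nn_params)
            x = x + dt * dx
            trajectory.append(x)
        return np.stack(trajectory)

    # Example usage with a 2-dimensional state
    rng = np.random.default_rng(0)
    x0 = np.array([1.0, 0.5])
    nn_params = (rng.normal(size=(2, 8)) * 0.1, np.zeros(8),
                 rng.normal(size=(8, 2)) * 0.1, np.zeros(2))
    traj = euler_solve(x0, theta=0.3, nn_params=nn_params)
    print(traj.shape)  # (101, 2)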

Citation

If you use this code or build upon it, please cite the paper with the following BibTeX entry:

@InProceedings{roeder2019efficient,
	title = "Efficient Amortised Bayesian Inference for Hierarchical and Nonlinear Dynamical Systems",
	author = "Geoffrey Roeder and Paul K Grant and Andrew Phillips and Neil Dalchau and Edward Meeds",
	booktitle = "International Conference on Machine Learning (ICML 2019)",
	year = "2019"
}

Dependencies

  • TensorFlow, a deep learning framework
  • NumPy, numerical linear algebra for Python
  • Pandas, data analysis and data structures
  • CUDA, a parallel computing framework. It's not essential, as the code can run (albeit more slowly) in CPU mode.

To install the Python dependencies, you can use pip with the requirements.txt file. We have verified that VI-HDS runs on TensorFlow v1.13.1. For GPU support, you will need CUDA v10.0.
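For example, from the repository root:

pip install -r requirements.txt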

Running an example

  1. Ensure the src directory is on your Python path. Optionally, also set the environment variables INFERENCE_DATA_DIR and INFERENCE_RESULTS_DIR to the directories from which data will be read and to which results will be written. By default, these are the local paths "data" (built-in data files are stored here) and "results" (already in the .gitignore file) respectively.

    In Linux:

    export PYTHONPATH=.
    export INFERENCE_DATA_DIR=data
    export INFERENCE_RESULTS_DIR=results

    In Windows:

    set PYTHONPATH=.
    set INFERENCE_DATA_DIR=data
    set INFERENCE_RESULTS_DIR=results
    
  2. Run the dr_constant_icml example by calling:

    python src/run_xval.py --experiment=EXAMPLE specs/dr_constant_xval.yaml 
  3. Run TensorBoard to visualise the output. A folder will be created in your user-specified results directory with a name that combines the EXAMPLE name and a timestamp, e.g.

    tensorboard --logdir=EXAMPLE_20181123T174132369485

    TensorBoard uses port 6006 by default, so you can then visualise your example at http://localhost:6006. Alternatively, you can specify another port.
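    For example, to serve the same run on port 6007 instead:

    tensorboard --logdir=EXAMPLE_20181123T174132369485 --port=6007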

Running tests

We use the pytest library to run tests.

In Windows:

set PYTHONPATH=.
pytest tests
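
In Linux, the equivalent is:

export PYTHONPATH=.
pytest tests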

Contact

E-mail us directly

We also have a project page here.

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.