
DOI

Scrambler Logo

Scrambler Neural Networks

Code for training Scrambler networks, an interpretation method for sequence-predictive models based on deep generative masking. The Scrambler learns to predict maximal-entropy position-specific scoring matrices (PSSMs) for a given input sequence such that downstream predictions are reconstructed (the "inclusion" objective). Alternatively, the Scrambler can be trained to output minimal-entropy PSSMs such that downstream predictions are distorted (the "occlusion" objective).
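To make the two objectives concrete, here is a rough NumPy sketch of the corresponding loss terms. It is only an illustration of the idea described above, not the repository's actual training code; the function names and the entropy_weight trade-off parameter are assumptions.

import numpy as np

def pssm_entropy(pssm):
    # Mean per-position entropy (in bits) of a PSSM shaped
    # (sequence_length, alphabet_size), with rows summing to 1.
    return np.mean(-np.sum(pssm * np.log2(pssm + 1e-8), axis=-1))

def prediction_divergence(p_original, p_scrambled):
    # KL divergence between the predictor's output on the original input
    # and its output on sequences sampled from the Scrambler's PSSM.
    return np.sum(p_original * (np.log(p_original + 1e-8) - np.log(p_scrambled + 1e-8)))

def inclusion_loss(p_original, p_scrambled, pssm, entropy_weight=1.0):
    # Reconstruct the downstream prediction while pushing the PSSM toward maximal entropy.
    return prediction_divergence(p_original, p_scrambled) - entropy_weight * pssm_entropy(pssm)

def occlusion_loss(p_original, p_scrambled, pssm, entropy_weight=1.0):
    # Distort the downstream prediction while keeping the PSSM close to minimal entropy.
    return -prediction_divergence(p_original, p_scrambled) + entropy_weight * pssm_entropy(pssm)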

Scramblers were presented in an MLCB 2020* conference paper, "Efficient inference of nonlinear feature attributions with Scrambling Neural Networks".

*2nd Conference on Machine Learning in Computational Biology (MLCB 2020), Online.

Contact jlinder2 (at) cs.washington.edu for any questions about the code.

Features

  • Efficient interpretation of sequence-predictive neural networks.
  • High-capacity interpreter based on ResNets.
  • Find multiple salient feature sets with mask dropout (see the sketch after this list).
  • Separate maximally enhancing and repressive features.
  • Fine-tune interpretations with per-example optimization.
  • Supports multiple-input predictor architectures.
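The mask dropout idea above can be pictured as randomly suppressing parts of the predicted importance mask during training, so that the interpreter cannot rely on a single feature set and is pushed to surface alternative salient features. The snippet below is a conceptual NumPy sketch under that reading; the function name, the expected shape, and the drop_rate parameter are hypothetical and do not reflect the package's API.

import numpy as np

def mask_dropout(importance_scores, drop_rate=0.5):
    # importance_scores: per-position saliency vector of shape (sequence_length,).
    # Randomly zero out a fraction of positions so that, across training passes,
    # the interpreter must explain the prediction with different feature sets.
    keep = np.random.rand(importance_scores.shape[0]) >= drop_rate
    return importance_scores * keep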

Installation

Install by cloning or forking the GitHub repository:

git clone https://github.com/johli/scrambler.git
cd scrambler
python setup.py install

Required Packages

  • TensorFlow == 1.13.1
  • Keras == 2.2.4
  • SciPy >= 1.2.1
  • NumPy >= 1.16.2
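For reference, a one-line pip command matching these pins might look as follows. This is a suggestion rather than part of the repository's instructions; note that TensorFlow 1.13.1 requires an older Python (roughly the 3.6/3.7 era).

pip install "tensorflow==1.13.1" "keras==2.2.4" "scipy>=1.2.1" "numpy>=1.16.2"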

Analysis Notebooks

The sub-folder analysis/ contains all the code used to produce the results of the paper.

Example Notebooks

The sub-folder examples/ contains a number of lightweight examples showing basic usage of the Scrambler package. The examples are listed below.

Images

Interpreting predictors for images.

Notebook 1: Interpreting MNIST Images

RNA

Interpreting predictors for RNA-regulatory biology.

Notebook 2a: Interpreting APA Sequences
Notebook 2b: Interpreting APA Sequences (Custom Loss)
Notebook 3a: Interpreting 5' UTR Sequences
Notebook 3b: Optimizing individual 5' UTR Interpretations
Notebook 3c: Fine-tuning pre-trained 5' UTR Interpretations

Protein

Interpreting predictors for proteins.

Notebook 4a: Interpreting Protein-protein Interactions (inclusion)
Notebook 4b: Interpreting Protein-protein Interactions (occlusion)
Notebook 5a: Interpreting Hallucinated Protein Structures (no MSA)
Notebook 5b: Interpreting Natural Protein Structures (with MSA)

Scrambler Training GIFs

The following GIFs illustrate how the Scrambler network interpretations converge on a few select input examples during training.

WARNING: The following GIFs contain flickering pixels/colors. Do not look at them if you are sensitive to such images.

Alternative Polyadenylation

The following GIF depicts a Scrambler trained to reconstruct APA isoform predictions.

APA GIF

5' UTR Translation Efficiency

The following GIF depicts a Scrambler trained to reconstruct 5' UTR translation efficiency predictions.

UTR5 GIF

Protein-Protein Interactions

The following GIF depicts a Scrambler trained to distort protein interaction predictions (siamese occlusion). Red letters correspond to designed hydrogen bond network positions. The second GIF displays the same interpretation projected onto the 3D structure of the complex.

Protein GIF

Protein GIF
