Functional Region Enrichment Analysis

This repository contains the computational pipeline used to perform the analysis presented in:

  • Sarkar, A. K., Ward, L. D., & Kellis, M. (2016). Functional enrichments of disease variants across thousands of independent loci in eight diseases. bioRxiv. http://dx.doi.org/10.1101/048066

Dependencies

The pipeline relies on GNU make, awk, and GNU parallel; the frea Python and R packages; SHAPEIT2, IMPUTE2, and SNPTEST for genotype imputation; UCSC liftOver; optionally vcftools; and the motif analysis software described below. See the relevant sections for details.

Commentary

The pipeline relies on many pieces of software and data generated in the Kellis lab with varying degrees of reproducibility, as well as on some specifics of our computing environment. This repository is a best-effort attempt to publicly release as much of the analysis as possible; the aspects that would require re-implementation to be fully reproducible are elaborated below.

Running the code

Importantly, this repository does not include code to farm work out to compute nodes in a cluster, since this is highly dependent on the cluster environment. The basic idea is to generate a list of tasks in files named like “joblist”, then use array jobs in PBS/Torque/GridEngine/LSF to run them in parallel. This approach is flexible and makes the best use of available resources.

In Univa Grid Engine, a simple but highly flexible implementation is:

awk -vn=$SGE_TASK_LAST -vi=$SGE_TASK_ID 'NR % n == i - 1' $1 | parallel --joblog $JOB_NAME.$JOB_ID.$SGE_TASK_ID.joblog -j1 --halt now,fail,1
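
As a sketch, the one-liner above could be saved as a script (run-joblist.sh is a hypothetical name) and submitted as an array job; the qsub flags shown are illustrative and site-dependent:

# Submit 100 array tasks; task i then processes every 100th line of the
# joblist, matching the striding in the awk command above.
qsub -cwd -t 1-100 run-joblist.sh joblist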

Other cluster managers set similar environment variables for array jobs. The script parses the array job parameters and loops over the appropriate subset of the lines of its first argument (the tasks). Error checking is done by parsing the joblog files and constructing joblists from the failed tasks:

# GNU parallel joblog columns: 1 = Seq, 7 = Exitval, 8 = Signal, 9 = Command
awk -vFS='\t' '$1 != "Seq" && ($7 != 0 || $8 != 0) {print $9}' $*
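
Applied to all joblogs from a run, this regenerates a joblist containing only the failed commands (the output name is illustrative):

# Collect failed commands into a new joblist for resubmission
awk -vFS='\t' '$1 != "Seq" && ($7 != 0 || $8 != 0) {print $9}' *.joblog > joblist.retry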

Getting the reference data

This is implemented in several pieces. The key tasks are:

  1. Download Roadmap Epigenomics data and generate a directory structure of BED files containing the enhancer annotations, plus a directory containing gene expression RPKM values (for studying transcription factor expression)
  2. Download ENCODE data and prepare files for motif enrichment
  3. Download the Thousand Genomes reference (for SHAPEIT2/IMPUTE2 imputation in WTCCC and for summary statistic imputation) and compute LD information between all SNPs in 1KG

The Roadmap data is prepared with:
make -C data/roadmap core honeybadger
make -C data/roadmap/core-features
make -C data/roadmap/hb-features
make -C data/roadmap/expression

The required directory structure for annotations is *features/{annotation}/{id}.bed.gz. This naming convention allows the work to be distributed (script frea-rrplot) and the summarized results to be re-assembled (R function frea::plot_rrplot).
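
For example, a joblist over this layout could be generated along the following lines; the frea-rrplot arguments shown are illustrative, not its actual interface:

# Sketch: one task per (study, annotation file) pair under
# *features/{annotation}/{id}.bed.gz. Paths and arguments are illustrative.
for study in results/impg/out/*.bed.gz; do
  for bed in data/roadmap/core-features/Enh/*.bed.gz; do
    echo "frea-rrplot $study $bed"
  done
done > joblist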

We include \b in the grep pattern Enh\b because the ChromHMM core model includes a “genic enhancer” state labelled EnhG, which shows transcription marks as well as enhancer marks. These regions generally fall within genes and can complicate the interpretation of the results.
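
The effect of the word boundary is easy to verify against the core model state labels:

# 7_Enh matches, but 6_EnhG and 12_EnhBiv do not: \b requires a non-word
# character (or end of line) immediately after "Enh".
printf '6_EnhG\n7_Enh\n12_EnhBiv\n' | grep 'Enh\b'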

make -C data/encode

The ENCODE data is mostly public, with the exception of the hierarchical clustering of PWMs. The similarity measure was published, but the clustering itself (cut at k=300 clusters) was not, so we include it in this repository.

make -C data/1kg stage
make -C data/1kg
make -C data/mask

We use the Thousand Genomes reference in OXSTATS format to handle summary statistics from the Bradfield et al., 2011 study in the main paper. These require odds ratios estimated from the WTCCC cohort, since the meta-analyzed odds ratios were not published.

EUR+.txt.gz contains pairwise LD computed using another custom program implemented in the Kellis lab. We do not include the LD information as a public download since it requires approximately 20 GB of disk space. In principle, this can be done using vcftools instead:

vcftools ... --hap-r2 --ld-window-bp 1000000 --min-r2 0.1

However, a wrapper will be needed to format the output correctly. The format for EUR+.txt.gz is tab-separated plaintext:

rs189107123|chr1|10611|10611|G  rs144762171|chr1|13327|13327|C	0.286308        0.739698

The first two columns are a pair of variants, followed by r^2 and D. The names are formatted as rsid, chromosome, start (inclusive), end (inclusive), allele substitution. The Python function frea.formats.get_pouyak_name generates these from the 1KG reference information.
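
A minimal wrapper over the vcftools output might look like the following sketch. It assumes the standard .hap.ld column layout (CHR, POS1, POS2, N_CHR, R^2, D, Dprime) and emits positional placeholder names, since the full identifiers require rsid and allele lookups against the 1KG reference (e.g. via frea.formats.get_pouyak_name):

# Sketch: reformat vcftools --hap-r2 output into the EUR+.txt.gz layout.
# The "." fields are placeholders for the rsid and allele substitution,
# which must be looked up in the 1KG reference.
awk -vOFS='\t' 'NR > 1 {
  n1 = ".|chr" $1 "|" $2 "|" $2 "|."
  n2 = ".|chr" $1 "|" $3 "|" $3 "|."
  print n1, n2, $5, $6
}' out.hap.ld | gzip > EUR+.txt.gz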

We keep a BED file containing the MHC for use later in the pipeline.

make -C data/gwas-summary-stats ...

This repository does not include the GWAS data since we are not allowed to share the WTCCC1 individual-level data, and the remaining data is publicly available. Direct links are given in data/gwas-summary-stats/Makefile when available.

The Stahl et al., 2012 study of RA is on hg17, so it requires lifting over before imputation. frea.summary.process.ra() outputs a BED file with hg17 positions, which should be processed using UCSC liftOver and the appropriate chain file:

http://hgdownload.cse.ucsc.edu/goldenPath/hg17/liftOver/hg17ToHg19.over.chain.gz
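
The liftOver invocation is then (file names illustrative):

# Map hg17 positions to hg19; records that fail to map are written separately
liftOver ra-hg17.bed hg17ToHg19.over.chain.gz ra-hg19.bed ra-unmapped.bed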

Summary statistics from the Bradfield et al., 2011 study of T1D require registering at http://www.t1dbase.org/ and then downloading from https://www.t1dbase.org/downloads/protected_data/GWAS_Data/. Imputing these to 1KG requires the WTCCC genotypes, since the study did not publish odds ratios.

Imputing Wellcome Trust Case Control Consortium genotypes

The imputation pipeline is implemented in two separate Makefiles: one for QC and pre-phasing, and one for imputation and post-processing. The pipeline is designed this way because downstream analyses require both the QC’ed observations and the imputed probabilities.

The WTCCC genotypes are downloadable through EGA (http://www.wtccc.org.uk/info/access_to_data_samples.html). We assume the data is in IMPUTE/SNPTEST format (http://www.wtccc.org.uk/info/data_formats.html), and our implementation converts it to IMPUTE2/SNPTEST2 format (http://www.stats.ox.ac.uk/~marchini/software/gwas/file_format.html). A manifest of the files used in our analysis is provided in data/wtccc/haplotypes/manifest.

We make a number of critical choices in our pipeline:

  1. Pre-imputation QC removes only the samples/SNPs in the provided exclusion list (we do not impose more stringent thresholds on e.g. missingness)
  2. We convert hg17 names/positions/strands to our identifiers (see below), hg19 positions, and + strand using published information
  3. Pre-phasing is done on each cohort independently to reduce computational burden
  4. Eigendecomposition for population structure is done on observed genotypes only
make -C data/arrays
cd data/wtccc/haplotypes
# Symlink to the data (elided; analogous to make pre)
make joblist{,.1}
# Run the tasks in joblist, then joblist.1 (elided)

# Now perform the eigendecomposition. This could be parallelized further by
# making individual targets, e.g. make T1D.eigenvec
make
cd data/wtccc/1kg-phase1
# We performed imputation in separate stages to save on space storing the
# output, e.g. run everything in T1D.joblist in temporary space (make sure to
# run make DATA=... to correctly set the data directory in that case), then
# verify average concordance (interactively), compress, and copy to the final
# location
make pre
make joblist
# Run everything in T1D.joblist (elided)
make T1D-concordance
make T1D-gz
make T1D-install
cd results/wtccc/snptest
make pre
make T1D.joblist
# Run everything in T1D.joblist (elided)

Imputing summary statistics

In general, imputing summary statistics requires additional code to coerce the data into the right format. The Python subpackage frea.summary contains utility functions that simplify this.

We re-implement the ImpG pipeline since we use the Thousand Genomes reference in OXSTATS format to handle one of the studies in the main paper. This can be replaced with the scripts provided in the original implementation.

make -C results/impg/maps joblist{,.1}
# Run everything in joblist, then joblist.1 (elided)
make -C results/impg/haps joblist{,.1}
# Run the jobs as above

The generation of reference maps and re-coded haplotypes can be done once and re-used. For each study, imputation proceeds like so:

# Prepare the files in ImpG format (elided)
make -C results/impg/in joblist
# Run the jobs (elided)
make -C results/impg/beta joblist
# Run the jobs (elided)
make -C results/impg/out joblist
# Run the jobs (elided)
make -C results/impg/out STUDY.bed.gz

Summary statistics are carried forward in BED format, with the unique identifier described above in the name column (column 4) and -log10(p) in the score column (column 5).
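
A quick sanity check of this format (a sketch; the file name is illustrative):

# Verify 5 tab-separated columns with a numeric -log10(p) score in column 5
zcat STUDY.bed.gz | awk -vFS='\t' 'NF != 5 || $5 !~ /^[0-9.eE+-]+$/ {bad++}
  END {print (bad ? bad " malformed lines" : "ok")}'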

Performing the analysis

The analysis is implemented across multiple Makefiles; however, the main purpose of the Makefiles is to generate joblists which can be run using a wrapper script as described above.

make -C results/rrplot/core-features/all-traits joblist
# Run the jobs (elided), then concatenate the output
make -C results/rrplot/core-features/all-traits Enh.txt.gz
R --vanilla --quiet <<EOF
frea::plot_rrplot('.../results/rrplot/core-features/all-traits/Enh.txt.gz', 'SNP rank by p-value', 25000)
EOF
make -C results/rrplot/core-features/all-traits cutoffs

The key tasks here are to run frea-rrplot for each GWAS study and annotation to compute cumulative enrichment (in parallel on a cluster), then re-assemble the output to plot everything together and output the heuristic p-value/rank cutoff used in the remaining analysis (R function frea::plot_rrplot).

For the paper, we restricted this analysis to the common set of variants across all phenotypes studied (all-studies-impg.bed.gz); however, this is not essential for performing the analysis on new data and can be safely removed.

# This can be at any desired threshold
parallel make THRESH=0.8 GWAS={} -C results/ld/pruned ::: .../results/impg/out/*.bed.gz
make -C results/rrplot/core-features/pruned joblist
# Run the jobs (elided), then concatenate the output
make -C results/rrplot/core-features/pruned Enh.txt.gz
R --vanilla --quiet <<EOF
frea::plot_rrplot('.../results/rrplot/core-features/pruned/Enh.txt.gz', 'Independent loci by p-value', 5000)
EOF

To perform LD pruning, we shard the data into per-chromosome files, prune each chromosome (in parallel), then re-assemble the result. We then generate an analogous list of tasks running frea-rrplot and re-assemble the result.
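
Schematically, the pattern is as follows; frea-prune stands in for the per-chromosome pruning command that the Makefile actually runs, and all names are illustrative:

# Shard a study by chromosome, prune each shard in parallel, re-assemble
zcat STUDY.bed.gz | awk '{print > ($1 ".bed")}'
parallel 'frea-prune {} > {.}.pruned.bed' ::: chr*.bed
sort -k1,1 -k2,2n chr*.pruned.bed | gzip > STUDY-pruned.bed.gz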

make -C data/ld
make -C results/matched table.txt.gz
make FEATURES=hb-features -C results/matched hb-features/joblist
# Run the jobs (elided), then concatenate the output to EnhClusters.in
make FEATURES=hb-features -C results/matched EnhClusters-significant.in
R --vanilla --quiet <<EOF
data(honeybadger_cluster_density, package='frea')
frea::plot_enhancer_enrichments('.../results/matched/hb-features/EnhClusters-significant.in', honeybadger_cluster_density)
EOF

The key tasks here are to generate the list of tasks running frea-matched for each GWAS study and annotation (run in parallel), then to re-assemble the result.

make -C results/great bed
# Upload the BED files to a public server (elided)
make -C results/great joblist
# Run the jobs (elided)
make -C results/great new-genes.txt known-genes.txt summary

Pathway analysis using GREAT requires putting the generated BED files (make bed) on a publicly accessible HTTP server due to limits on upload size. Alternatively, one could build a local copy of GREAT (though we were unable to compile the software in our environment).
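
Any static, publicly reachable location works for hosting the BED files; the host and path here are placeholders:

# Copy the BED files to a web-accessible directory for GREAT to fetch
rsync -av results/great/*.bed.gz user@example.org:/var/www/html/great/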

make -C results/motifs/by-inst enriched joblist{,.1}
# Run everything in joblist, then joblist.1 (elided)
make -C results/motifs/by-inst all-enrichments-summary.txt.gz known-tfs.txt
R --vanilla --quiet <<EOF
frea::plot_motif_enrichments(gzfile('.../results/motifs/by-inst/all-enrichments-summary.txt.gz'))
data(honeybadger_cluster_density, package='frea')
frea::plot_master_regulator_counts(gzfile('.../results/motifs/by-inst/all-enrichments-summary.txt.gz'), honeybadger_cluster_density)
EOF
make -C results/motifs/by-inst all-enrichments-network.pdf

The motif enrichment pipeline depends on software developed by Pouya Kheradpour. Instructions to compile these are available at http://compbio.mit.edu/pouyak/.
