phac-nml/mikrokondo


Introduction

What is mikrokondo?

Mikrokondo is a tidy workflow for performing routine bioinformatic tasks such as read pre-processing, contamination assessment, assembly, and quality assessment of assemblies. It is easily configurable, dynamically dispatches species-specific workflows, and produces common outputs.

Is mikrokondo right for me?

Mikrokondo is purpose-built to give sequencing and clinical laboratories a single standardized workflow covering the initial quality assessment of sequencing reads and assemblies, together with initial pathogen-specific typing. It is designed to be configurable, so new tools and quality metrics can easily be incorporated, allowing these routine tasks to be automated regardless of the pathogen of interest. It currently accepts Illumina, Nanopore, or PacBio sequencing data (PacBio data only partially tested), and it is capable of hybrid assembly or accepting pre-assembled genomes.

This workflow will detect which pathogen(s) are present, apply the applicable metrics and genotypic typing where appropriate, and generate reports that are easy to read and understand. If your group regularly sequences or analyzes genomic data, implementing this workflow will automate the hands-on time usually required for these common bioinformatic tasks.

Citation

This software (currently unpublished) can be cited as:

An extensive list of references for the tools used by the pipeline can be found in the CITATIONS.md file.

Contact

[Matthew Wells] : matthew.wells@phac-aspc.gc.ca

Installing mikrokondo

Step 1: Installing Nextflow

Nextflow is required to run mikrokondo (Linux is required), and instructions for its installation can be found at either Nextflow Home or the Nextflow Documentation.

Step 2: Choose a Container Engine

Nextflow and mikrokondo only support running the pipeline using containers, such as Docker, Singularity (now Apptainer), Podman, gitpod, Shifter, and CharlieCloud. Currently only usage with Singularity has been fully tested (Docker and Apptainer have been partially tested), but support for each of these container services exists.

Note

Singularity was adopted by the Linux Foundation and is now called Apptainer. Singularity still exists, but it is likely newer installs will use Apptainer.
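If you are unsure which variant a system provides, a quick check like the following can help (a sketch, not part of the pipeline itself):

```shell
# Print the version of whichever container engines are on PATH.
for tool in apptainer singularity docker; do
    if command -v "$tool" >/dev/null 2>&1; then
        "$tool" --version
    else
        echo "$tool: not installed"
    fi
done
```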

Docker or Singularity?

Docker requires root privileges, which can make it a hassle to install on computing clusters (workarounds exist). Apptainer/Singularity does not, so Apptainer/Singularity is the recommended way to run the pipeline.

Step 3: Install dependencies

Besides the Nextflow runtime (which requires Java) and a container engine, the dependencies required by mikrokondo are fairly minimal: only Python 3.10 (more recent Python versions work as well) is needed.

Dependencies listed

  • Python (>= 3.10)
  • Nextflow (>= 22.10.1)
  • Container service (Docker, Singularity, and Apptainer have been tested)
  • The source code: git clone https://github.com/phac-nml/mikrokondo.git
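A quick sanity check of the dependencies above might look like this (nextflow and git are only checked if present on PATH):

```shell
# Report the local Python version; mikrokondo needs >= 3.10.
python3 -c 'import sys; print("Python", sys.version.split()[0])'

# Check for the remaining tools without failing if they are absent.
command -v nextflow >/dev/null 2>&1 && nextflow -version || echo "nextflow: not on PATH"
command -v git >/dev/null 2>&1 && git --version || echo "git: not on PATH"
```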

Step 4: Further resources to download

  • GTDB Mash Sketch: required for speciation and for determining whether a sample is metagenomic
  • Decontamination Index: required for decontamination of reads (it is simply a minimap2 index)
  • Kraken2 nt database: required for binning of metagenomic data; an alternative to using Mash for speciation
  • Bakta database: running Bakta is optional. A light database option exists, but the full one is recommended. You will have to unzip and un-tar the database for usage. Because Bakta can be skipped, downloading this database is optional.
  • StarAMR database: running StarAMR is optional and requires downloading the StarAMR databases. If you wish to avoid the download, the StarAMR container includes a database that mikrokondo will default to when one is not specified, making this requirement optional.

Configuration and settings:

The above downloadable resources must be referenced in your nextflow.config. The entries to update in the params section of nextflow.config are listed below:

// Bakta db path, note the quotation marks
bakta {
    db = "/PATH/TO/BAKTA/DB"
}

// Decontamination minimap2 index, note the quotation marks
r_contaminants {
    mega_mm2_idx = "/PATH/TO/DECONTAMINATION/INDEX"
}

// kraken db path, note the quotation marks
kraken {
    db = "/PATH/TO/KRAKEN/DATABASE/"
}

// GTDB Mash sketch, note the quotation marks
mash {
    mash_sketch = "/PATH/TO/MASH/SKETCH/"
}

// STARAMR database path, note the quotation marks
// Passing in a StarAMR database is optional if one is not specified the database in the container will be used. You can just leave the db option as null if you do not wish to pass one
staramr {
  db = "/PATH/TO/STARAMR/DB"
}
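Taken together, the snippets above live under the params scope of nextflow.config. A minimal sketch (all paths are placeholders for your local database locations) might look like:

```groovy
// nextflow.config -- database locations under the params scope
params {
    bakta {
        db = "/PATH/TO/BAKTA/DB"
    }
    r_contaminants {
        mega_mm2_idx = "/PATH/TO/DECONTAMINATION/INDEX"
    }
    kraken {
        db = "/PATH/TO/KRAKEN/DATABASE/"
    }
    mash {
        mash_sketch = "/PATH/TO/MASH/SKETCH/"
    }
    staramr {
        db = null  // optional: leave null to use the database bundled in the container
    }
}
```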

Getting Started

Usage

nextflow run main.nf --input PATH_TO_SAMPLE_SHEET --outdir OUTPUT_DIR --platform SEQUENCING_PLATFORM -profile CONTAINER_TYPE

Please check out the documentation for complete usage instructions here: docs

Under the usage section you can find example commands, instructions for configuration and a reference to a utility script to reduce command line bloat!

Data Input/formats

Mikrokondo requires two things as input:

  1. Sample files - fastq and fasta must be in gzip format
  2. Sample sheet - this FOFN (file of file names) contains sample names and allows the user to combine read sets. The following header fields are accepted:
    • sample
    • fastq_1
    • fastq_2
    • long_reads
    • assembly
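As an illustration, a sample sheet using these headers might look like the following (all file names are hypothetical, and the usage docs describe which column combinations are valid):

```csv
sample,fastq_1,fastq_2,long_reads,assembly
sampleA,sampleA_R1.fastq.gz,sampleA_R2.fastq.gz,,
sampleB,,,sampleB.nanopore.fastq.gz,
sampleC,,,,sampleC.contigs.fa.gz
```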

For more information see the usage docs.

Output/Results

All output files will be written into the outdir (specified by the user). More detailed tool results can be found in both the Workflow and Subworkflow sections of the docs. A brief description of the outdir structure follows (in brief: the deeper into the structure you go, the later in the workflow the tool was run):

  • Assembly - contains all output files generated as a result of read assembly and tools using assembled contigs as input
    • Annotation - contains output files generated from tools applying annotation and/or gene characterization from assembled contigs
    • Assembling - contains output files generated as a part of the assembly process in nested order
    • FinalAssembly - this directory will always contain the final output contig files from the last step in the assembly process (will take into account any skip flags in the process)
    • PostProcessing - contains output files from intermediary tools that run after assembly but before annotation takes place in the workflow
    • Quality - contains all output files generated as a result of quality tools after assembly
  • Subtyping - contains all output files from workflow subtyping tools, based off assembled contigs
  • FinalReports - contains assorted reports including aggregated and flat reports
  • pipeline_info - includes tool versions and other pipeline specific information
  • Reads - contains all output files generated as a result of read processing and tools using reads as input
    • FinalReads - this directory will contain the final output read files from the last step in read processing (taking into account any skip flags used in the run)
    • Processing - contains output files from tools run to process reads in nested order
    • Quality - contains all output files generated from read quality tools
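Put together, a run's outdir is laid out roughly as follows (directory names taken from the list above):

```
OUTDIR/
├── Assembly/
│   ├── Annotation/
│   ├── Assembling/
│   ├── FinalAssembly/
│   ├── PostProcessing/
│   └── Quality/
├── Subtyping/
├── FinalReports/
├── pipeline_info/
└── Reads/
    ├── FinalReads/
    ├── Processing/
    └── Quality/
```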

Run example data

Four test profiles with example data are provided and can be run like so:

  • Assembly test profile: nextflow run main.nf -profile test_assembly,<docker/singularity> --outdir <OUTDIR>
  • Illumina test profile: nextflow run main.nf -profile test_illumina,<docker/singularity> --outdir <OUTDIR>
  • Nanopore test profile: nextflow run main.nf -profile test_nanopore,<docker/singularity> --outdir <OUTDIR>
  • Pacbio test profile: nextflow run main.nf -profile test_pacbio,<docker/singularity> --outdir <OUTDIR>
    • The PacBio workflow has only been partially tested, as it fails at Flye due to too few reads being present.

Testing

Integration tests are implemented using nf-test. In order to run tests locally, please do the following:

Install nf-test

# You only need to install the nf-test package. The commands below are
# for creating a separate environment containing both Nextflow and nf-test.
conda create --name nextflow-testing nextflow nf-test
conda activate nextflow-testing

Run tests

# From mikrokondo root directory
nf-test test

Add --profile singularity to switch from the default (Docker) to Singularity.

Troubleshooting and FAQs:

Within release 0.1.0, Bakta is skipped by default, but it can be enabled from the command line or within the nextflow.config (please check the docs for more information). It was disabled by default because the latest Bakta database releases trigger an issue with amr_finder; fixes are available, and older databases still work, although they have not been tested. A user can still enable Bakta themselves or fix the database. More information is provided here: oschwengers/bakta#268

For a list of common issues or errors and their solutions, please read our FAQ section.

References

An extensive list of references for the tools used by the pipeline can be found in the CITATIONS.md file.

Legal and Compliance Information:

Copyright Government of Canada 2023

Written by: National Microbiology Laboratory, Public Health Agency of Canada

This pipeline uses code and infrastructure developed and maintained by the nf-core community, reused here under the MIT license.

The nf-core framework for community-curated bioinformatics pipelines.

Philip Ewels, Alexander Peltzer, Sven Fillinger, Harshil Patel, Johannes Alneberg, Andreas Wilm, Maxime Ulysse Garcia, Paolo Di Tommaso & Sven Nahnsen.

Nat Biotechnol. 2020 Feb 13. doi: 10.1038/s41587-020-0439-x.

Updates and Release Notes: