FASTQ compression benchmark


Benchmarking fastq compression with generic (mature) compression algorithms

Motivation

This benchmark is motivated by a recent question from Ryan Connor on the µbioinfo Slack group:

my impression is that bioinformatics really likes gzip (and only gzip?), but that there are other generic compression algs that are better (for bioinfo data types); assuming you agree (if not, why not?), why haven't the others compression types caught on in bioinformatics?

It kicked off an interesting discussion, which led me to dig into the literature and see what I could find. I'm sure I could search deeper and for longer, but I really couldn't find any benchmarks that satisfied me. Don't get me wrong, there are plenty of benchmarks, but they always look at bioinformatics-specific tools for compressing sequencing data. Sure, these perform well, but every repository I visited hadn't been touched in a while. When archiving data, the last thing I want is to try to decompress my data and find the tool no longer installs/works on my system. In addition, I want the tool to be ubiquitous and mature. I know this is a lot of constraints, but hey, that's what I am interested in.

This benchmark only covers ubiquitous/mature/generic compression tools.

Methods

Tools

The tools tested in this benchmark are:

  • gzip
  • bzip2
  • xz
  • zstd
  • brotli

These tools were used as they were the main ones that popped up in our discussion. Feel free to raise an issue on this repository if you would like to see another tool included.

All compression level settings were tested for each tool and default settings were used for all other options.
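
To make that sweep concrete, here is a minimal sketch of how every level could be run for each tool (this is illustrative only, not the benchmark's actual pipeline); the level ranges come from each tool's documentation (gzip/bzip2: 1-9, xz: 0-9, zstd: 1-19, brotli: 0-11):

```python
import subprocess
from pathlib import Path

# Level ranges taken from each tool's man page.
LEVELS = {
    "gzip": range(1, 10),
    "bzip2": range(1, 10),
    "xz": range(0, 10),
    "zstd": range(1, 20),
    "brotli": range(0, 12),
}

def compress_all_levels(fastq: Path, outdir: Path) -> None:
    """Compress one fastq with every tool at every compression level."""
    for tool, levels in LEVELS.items():
        for level in levels:
            out = outdir / f"{fastq.name}.{tool}.{level}"
            if tool == "brotli":
                # brotli takes its quality level via -q rather than -<N>
                cmd = ["brotli", "-q", str(level), "-c", str(fastq)]
            else:
                cmd = [tool, f"-{level}", "-c", str(fastq)]
            with open(out, "wb") as fh:
                subprocess.run(cmd, stdout=fh, check=True)
```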

Data

The data used to test each tool are fastqs:

Nanopore

Illumina

Note: I couldn't find sources for all of these samples. If you can fill in some of the gaps, please raise an issue and I will gladly update the sources.

All data were downloaded with fastq-dl (v2.0.1). Paired Illumina data were combined into a single fastq file.
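
For the paired Illumina data, one simple way to combine the two read files is plain byte concatenation - this works for plain-text fastqs, and also for gzipped ones because concatenated gzip members form a valid gzip stream. This is just an illustrative sketch; the benchmark itself may have combined the pairs differently:

```python
import shutil
from pathlib import Path

def combine_pairs(r1: Path, r2: Path, combined: Path) -> None:
    """Concatenate paired-end fastq files (plain or gzipped) into one file."""
    with open(combined, "wb") as out:
        for part in (r1, r2):
            with open(part, "rb") as fh:
                shutil.copyfileobj(fh, out)
```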

Results

Compression ratio

The first question is how much smaller each compression tool makes a fastq file. As this also depends on the compression level selected, all possible levels were tested for each tool (the default being indicated with a red circle).

The compression ratio is a percentage of the original file size - i.e., $\frac{\text{compressed size}}{\text{uncompressed size}}$.
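
In code, the ratio reported here is simply the quotient of the two file sizes (a small illustrative helper, not part of the benchmark itself):

```python
import os

def compression_ratio(original: str, compressed: str) -> float:
    """Compressed size as a percentage of the original (uncompressed) size."""
    return 100 * os.path.getsize(compressed) / os.path.getsize(original)
```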


Compression ratio figure

Figure 1: Compression ratio (y-axis) for different compression tools and levels. Compression ratio is a percentage of the original file size. The red circles indicate the default compression level for each tool. Illumina data is represented with a solid line and circular points, whereas Nanopore data is a dashed line with cross points. Translucent error bands represent the 95% confidence interval.


The most striking result here is the noticeable difference in compression ratio between Illumina and Nanopore data, regardless of the compression tool used. (If anyone can suggest a reason for this, please raise an issue.)

Update 07/06/2023: Peter Menzel mentioned this is likely due to the noisier quality scores in the Nanopore data. Illumina quality scores are generally quite homogeneous, which increases compressibility.

Using default settings, zstd and gzip provide similar ratios, as do brotli, xz and bzip2 (however, compression level doesn't seem to actually change the ratio for bzip2). When using the highest compression level, xz provides the best compression (however, this comes at a cost to runtime, as we'll see below) - slightly better than brotli, which is a close second.

(De)compression rate and memory usage

In many scenarios, the (de)compression rate is just as important as the compression ratio. However, if compressing for archival purposes, rate is probably not as important.

The (de)compression rate is $\frac{\text{uncompressed size}}{\text{(de)compression time (secs)}}$.
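
As a rough sketch of that measurement (the benchmark itself may time and profile the runs differently), the rate can be computed from the uncompressed file size and the wall-clock time of the (de)compression command:

```python
import os
import subprocess
import time

def measure_rate(cmd: list[str], uncompressed: str, output: str) -> float:
    """Run a (de)compression command writing to `output` and return the rate
    in MB of uncompressed data processed per second of wall-clock time."""
    start = time.perf_counter()
    with open(output, "wb") as fh:
        subprocess.run(cmd, stdout=fh, check=True)
    elapsed = time.perf_counter() - start
    return os.path.getsize(uncompressed) / elapsed / 1e6

# e.g. measure_rate(["zstd", "-c", "reads.fastq"], "reads.fastq", "reads.fastq.zst")
```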


Compression rate figure

Figure 2: Compression (left column) and decompression (right column) rate (top row) and peak memory usage (lower row). Note the log scale for rate. The red circles indicate the default compression level for each tool. Illumina data is represented with a solid line and circular points, whereas Nanopore data is a dashed line with cross points. Translucent error bands represent the 95% confidence interval.


As alluded to earlier, xz and brotli pay for their fantastic compression ratios by being orders of magnitude slower than the other tools at compressing (using the default compression level). brotli also uses more memory than the other tools when compressing at its default level - although in absolute terms, the highest memory usage is still well below 1GB (for xz at the highest compression level).

The main take-away from Figure 2 is that zstd (de)compresses much faster than the other tools (using the default level). Compression level seems to have a big impact on compression rate (except for bzip2), but not so much on decompression rate.

Rate vs. Ratio

Cornelius Roemer suggested plotting rate against ratio in order to get a Pareto frontier. These are good plots to get a quick sense of which algorithms are best suited to a specific use case. The lower right corner is the "magic zone" where an algorithm has a high rate and a low (good) compression ratio. In Figure 3 we see that the compression version of this plot is a little messy, as the compression rate is quite variable. However, gzip and zstd do tend to have more points on the lower-ish right, with a smattering of brotli points - though there are also a number of brotli points on the left. The decompression plot is a lot clearer and we get nice "fronts". From this it is clear that zstd and brotli give fast decompression even with good compression ratios.

Pareto frontier figure

Figure 3: Compression (top row) and decompression (lower row) rate (x-axis) against compression ratio (y-axis). Note the log scale for rate. Illumina data is represented with circular points and Nanopore data with cross points.
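
For reference, extracting a Pareto front from (rate, ratio) points is straightforward: a point is kept only if no other point is both faster and produces a smaller ratio. A small illustrative sketch (not the code used to draw Figure 3):

```python
def pareto_frontier(points: list[tuple[float, float]]) -> list[tuple[float, float]]:
    """Keep the (rate, ratio) points not dominated by any point that is both
    faster (higher rate) and smaller (lower compression ratio)."""
    frontier = []
    # Visit points from fastest to slowest; a point joins the front only if it
    # beats the best (lowest) ratio seen so far among faster points.
    for rate, ratio in sorted(points, key=lambda p: (-p[0], p[1])):
        if not frontier or ratio < frontier[-1][1]:
            frontier.append((rate, ratio))
    return frontier
```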

Conclusion

So which tool should you use? As is so often the case with benchmarks: it depends on your situation.

If all you care about is compressing your data to the smallest size possible, and you don't mind how long it takes, then xz (compression level 9) or brotli (level 11 - the default) are the obvious choices. However, if you're planning a really good one-off compression, but expect to decompress regularly, brotli is probably the better option.

If you want fast (de)compression, then zstd (with default options) is the best choice, though a special mention should also go to brotli for its decompression rate.

If, like most people, you're contemplating replacing gzip (default options), the situation is a little less clear. As a drop-in replacement, zstd (default options) will give you about the same compression ratio with ~10-fold faster compression and ~3-5-fold faster decompression. Another option is bzip2, which will give you ~1.2-fold better compression ratios than gzip (and zstd) with a compression rate comparable to gzip. However, bzip2's decompression rate is ~5-fold slower than gzip's.

One final consideration is APIs for various programming languages. If it is difficult to read/write files that are compressed with a given algorithm, then using that compression type might cause problems. Most (good) bioinformatics tools support gzip-compressed input and output. However, support for other compression types shouldn't be too much work for most software tool developers provided a well-maintained and documented API is available in the relevant programming language. Here is a list of APIs for the tested compression tools in a selection of programming languages with an arbitrary grading system for how "stable" I think they are (feel free to put in a pull request if you want to contribute other languages).

Language   gzip   bzip2   xz   zstd   brotli
Python     A      A       A    B+     A
Rust       A      B+      B+   B      B+
C/C++      A      A       A    A      A
Julia      A      A       A    A      NA
Go         A      A       B    B      A
  • A: standard library (i.e. builtin) or library is maintained by the original developer (note: Rust's gzip library is maintained by rust-lang itself)
  • B: external library that is actively maintained, well-documented, and has quick response times
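
As a sense of how little code is involved, here is a sketch of a Python reader that handles plain, gzip-, and zstd-compressed fastqs. It uses the third-party zstandard package (an assumption - the table above does not name which binding was graded); gzip support is in the standard library:

```python
import gzip
import io

import zstandard  # third-party binding for zstd; `pip install zstandard`

def open_fastq(path: str):
    """Open a plain, gzip-, or zstd-compressed FASTQ as a text stream."""
    if path.endswith(".gz"):
        return gzip.open(path, "rt")
    if path.endswith(".zst"):
        fh = open(path, "rb")
        reader = zstandard.ZstdDecompressor().stream_reader(fh)
        return io.TextIOWrapper(reader)
    return open(path, "rt")
```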