RANGER

Download large files in parallel chunks in Go.

Why?

Go's HTTP clients download files as a stream of bytes, usually through a buffer. If you consider every file an array of bytes, this means that when you initiate a download a connection is opened and you receive an io.Reader. As you Read bytes off this Reader, more bytes are loaded into an internal buffer (an in-memory byte array that holds a certain amount of data in the expectation that you'll read it soon). As you keep reading, the HTTP client fills the buffer back up as fast as it can from the server.
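
For contrast, here is the usual single-stream pattern with nothing but the standard library (the URL and filename are placeholders):

```go
package main

import (
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	// A single connection streams the whole file; each Read pulls
	// bytes out of the client's internal buffer as they arrive.
	resp, err := http.Get("https://example.com/large-file.bin") // placeholder URL
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	out, err := os.Create("large-file.bin") // placeholder filename
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()

	// io.Copy reads from the response body only as fast as the
	// single connection can deliver bytes.
	if _, err := io.Copy(out, resp.Body); err != nil {
		log.Fatal(err)
	}
}
```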

So? Why is this a problem?

Most of the time this is exactly what we want. But when we're downloading large files (say from Amazon S3, CloudFront, or any other file server) it's usually not optimal. These services place per-connection speed limits on outgoing bytes, and if you're downloading a very large file (say over 25 GB) you're also unlikely to benefit from their caches. The result is that the number of bytes coming in per second (your effective bandwidth) is usually lower than what your connection actually supports.

What does Ranger do?

Ranger does two orthogonal things to speed up transfers. One, it downloads files in chunks: if a file is 1000 bytes, for example, it can be downloaded in chunks of 100 bytes by requesting byte ranges 0-99, 100-199, 200-299 and so on with HTTP Range GETs. This lets the service cache each chunk: even if the total file is too large to cache, each chunk is still small enough to fit. See the CloudFront Developer Guide for more information.
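
Each chunk is an ordinary GET carrying a Range header. A minimal standard-library sketch, with a placeholder URL and range:

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Request only bytes 0-99 of the file (inclusive on both ends).
	req, err := http.NewRequest(http.MethodGet, "https://example.com/large-file.bin", nil) // placeholder URL
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Range", "bytes=0-99")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// Servers that honour the Range header reply with 206 Partial Content
	// and a Content-Range header describing the slice they returned.
	fmt.Println(resp.Status, resp.Header.Get("Content-Range"))

	chunk, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("got %d bytes\n", len(chunk))
}
```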

Two, it downloads upcoming chunks in parallel. If the parallelism count is set to 3, in the example above it would download byte ranges 0-99, 100-199 and 200-299 in parallel, even while the first range is being Read, and it starts downloading the fourth range as soon as the first has been read, and so on. This trades RAM for speed: dedicating 3 x 100 bytes of memory lets the download run that much faster. In practice, 8 MB to 16 MB is a good chunk size, especially if it lines up with the multipart upload boundaries in a system like S3. See the S3 Developer Guide for more information.
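
The idea is easy to sketch with goroutines and a semaphore. This is a simplified illustration of the technique, not ranger's actual implementation, and the URL and sizes are placeholders:

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"sync"
)

const (
	fileURL   = "https://example.com/large-file.bin" // placeholder URL
	chunkSize = 100
	fileSize  = 1000 // assume the length is known, e.g. from a HEAD request
)

// fetchChunk issues one ranged GET for bytes [start, end] inclusive.
func fetchChunk(start, end int) ([]byte, error) {
	req, err := http.NewRequest(http.MethodGet, fileURL, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Range", fmt.Sprintf("bytes=%d-%d", start, end))
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	numChunks := (fileSize + chunkSize - 1) / chunkSize
	chunks := make([][]byte, numChunks)

	var wg sync.WaitGroup
	sem := make(chan struct{}, 3) // at most 3 ranged GETs in flight
	for i := 0; i < numChunks; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			sem <- struct{}{}        // acquire a parallelism slot
			defer func() { <-sem }() // release it when done
			start := i * chunkSize
			end := start + chunkSize - 1
			if end >= fileSize {
				end = fileSize - 1
			}
			b, err := fetchChunk(start, end)
			if err != nil {
				log.Fatal(err)
			}
			chunks[i] = b // slot i keeps the chunks ordered
		}(i)
	}
	wg.Wait()

	// chunks[0], chunks[1], ... can now be concatenated in order.
	var total int
	for _, c := range chunks {
		total += len(c)
	}
	log.Printf("downloaded %d bytes in %d chunks", total, numChunks)
}
```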

Usage & Integration

The lowest-level usage is to create a new Ranger with a chunk size, a parallelism count, and a fetcher function. When the Ranger is invoked with a file length, it calls the fetch function with each byte range, in parallel, and collects and orders the resulting chunk Readers. This low-level API is the one to use if you have a custom protocol to fetch data.
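
The fetcher's shape is easiest to see in a sketch. The type and helper below are illustrative inventions, not ranger's actual identifiers; the real signatures are in the package documentation:

```go
package main

import (
	"io"
	"log"
	"os"
)

// FetchFunc is a hypothetical stand-in for the fetcher ranger expects:
// given an inclusive byte range, return a Reader over just those bytes.
// This only shows the shape of the idea, not the real signature.
type FetchFunc func(start, end int64) (io.Reader, error)

// fileFetcher builds a custom-protocol fetcher that serves ranges out
// of a local file instead of HTTP, to show the idea isn't tied to HTTP.
func fileFetcher(f *os.File) FetchFunc {
	return func(start, end int64) (io.Reader, error) {
		// SectionReader exposes bytes [start, end] of f without copying.
		return io.NewSectionReader(f, start, end-start+1), nil
	}
}

func main() {
	f, err := os.Open("large-file.bin") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	fetch := fileFetcher(f)
	r, err := fetch(0, 99) // bytes 0-99 of the file
	if err != nil {
		log.Fatal(err)
	}
	n, err := io.Copy(io.Discard, r)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("read %d bytes from the range", n)
}
```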

For regular use, RangingHTTPClient uses a given http.Client to fetch chunks as configured. It also exposes a standard http.Client via the RangingHTTPClient.StandardClient method; the returned client fetches chunk ranges through the RangingHTTPClient.
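
A hedged sketch of that flow. The constructor call is commented out because its exact name and parameters are assumptions; only RangingHTTPClient and StandardClient are named in this README, so check the package documentation for the real API:

```go
package main

import (
	"io"
	"log"
	"net/http"
)

func main() {
	// Hypothetical construction, for illustration only:
	//   rhc := ranger.NewRangingHTTPClient(http.DefaultClient /*, chunk size, parallelism */)
	//   client := rhc.StandardClient() // StandardClient is named in this README
	client := http.DefaultClient // stand-in so this sketch compiles

	// Everything below is plain http.Client usage; with the ranging
	// client plugged in, reads from resp.Body would be served from
	// parallel ranged chunk fetches behind the scenes.
	resp, err := client.Get("https://example.com/large-file.bin") // placeholder URL
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	if _, err := io.Copy(io.Discard, resp.Body); err != nil {
		log.Fatal(err)
	}
}
```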

This means that Ranger integrates well on both sides: Grab and other download managers can use a ranging client via a standard http.Client, while Ranger itself can wrap HTTP clients that provide automatic retries and similar features, like Heimdall or go-retryablehttp.
