quickpool

Fast and easy parallel computing in C++11

Why quickpool?

Developer friendly

The library consists of a single header file with a permissive license. It only requires C++11 and is otherwise self-contained. Just drop quickpool.hpp into your project folder and enjoy.
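
A minimal project setup might look like this (the compiler invocation is only illustrative; any C++11 compiler with threading support works, and on GCC/Clang the -pthread flag is typically needed):

// main.cpp
#include "quickpool.hpp"  // the single header, copied into the project folder

int main() {
    // use the functions shown in the examples below
}

// build, e.g.:  g++ -std=c++11 -pthread main.cpp -o main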

User friendly API

Loops can be nested, see the examples below. All functions dispatch to a global thread pool that is instantiated only once, with as many threads as there are cores. Optionally, one can create a local ThreadPool that exposes the same functions as member functions (see the last example below). See also the API documentation.
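
For orientation, these are the free functions used throughout the examples below (signatures paraphrased; see the API documentation for the exact declarations):

push(f, args...);               // schedule f(args...) on the global pool
auto fut = async(f, args...);   // like push(), but returns a future for the result
wait();                         // wait for all jobs pushed so far to finish
parallel_for(begin, end, f);    // calls f(i) for every i in [begin, end)
parallel_for_each(range, f);    // calls f(x) for every element x of range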

Cutting edge algorithms

All scheduling uses work stealing synchronized by cache-aligned atomic operations.

The thread pool assigns each worker thread its own task queue. Workers process their own queue first and then steal work from others. The algorithm is lock-free in the standard case where only a single thread pushes work to the pool.

Parallel loops assign each worker part of the loop range. When a worker completes its own range, it steals half the range of another worker. This perfectly balances the load and only requires a logarithmic number of steals (= points of contention). The algorithm uses double-wide compare-and-swap, which is lock-free on most modern processor architectures.
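
To illustrate the range-splitting idea only (this is a conceptual sketch, not quickpool's actual implementation, which packs the range bounds into a single double-wide atomic and updates them with compare-and-swap):

// Conceptual sketch: each worker owns a half-open range [first, last).
// The owner consumes indices from the front; a thief takes the upper half
// of whatever remains, so a range can be stolen from at most O(log n) times.
struct Range {
    int first, last;
    bool empty() const { return first >= last; }
    int pop_front() { return first++; }   // owner: take the next index
    Range steal_half() {                  // thief: take the back half
        int mid = first + (last - first + 1) / 2;
        Range stolen{mid, last};
        last = mid;
        return stolen;
    }
};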

Examples

Thread pool

push([] { /* some work */ });
wait(); // wait for current jobs to finish

auto f = async([] { return 1 + 1; }); // get future for result
// do something else ...
auto result = f.get(); // waits until done and returns result

Both push() and async() can be called with extra arguments passed to the function.

auto work = [] (const std::string& title, int i) { 
  std::cout << title << ": " << i << std::endl; 
};
push(work, "first title", 5);
async(work, "other title", 99);
wait();

Parallel loops

Existing sequential loops are easy to parallelize:

std::vector<double> x(10, 1);

// sequential version
for (int i = 0; i < x.size(); ++i) 
  x[i] *= 2;

// parallel version
parallel_for(0, x.size(), [&] (int i) { x[i] *= 2; });


// sequential version
for (auto& xx : x) 
  xx *= 2;

// parallel version
parallel_for_each(x, [] (double& xx) { xx *= 2; });

The loop functions automatically wait for all jobs to finish, but only when called from the main thread.

Nested parallel loops

It is possible to nest parallel for loops, provided that we don't need to wait for inner loops.

std::vector<double> x(10, 1);

// sequential version
for (int i = 0; i < x.size(); ++i) {
  for (int j = 4; j < 9; j++) {
    x[i] *= j;
  }
}

// parallel version
parallel_for(0, x.size(), [&] (int i) { 
  // *important*: capture i by copy
  parallel_for(4, 9, [&, i] (int j) {  x[i] *= j; }); // doesn't wait
}); // does wait

Local thread pool

A ThreadPool can be set up manually, with an arbitrary number of threads. When the pool goes out of scope, all threads are joined.

ThreadPool pool(2); // thread pool with two threads

pool.push([] {});
pool.async([] {});
pool.wait();

pool.parallel_for(2, 5, [&] (int i) {});
std::vector<double> x(10, 1);
pool.parallel_for_each(x, [] (double& xx) {});
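
Because destruction joins the workers, a pool can also be scoped to a block of work. A small sketch (std::thread::hardware_concurrency() comes from <thread> and mirrors the one-thread-per-core default of the global pool):

{
    ThreadPool pool(std::thread::hardware_concurrency()); // one worker per core
    pool.push([] { /* some work */ });
    pool.wait(); // make sure the work is done before relying on its results
} // pool goes out of scope here; its threads are joined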