This repository has been archived by the owner on Feb 2, 2024. It is now read-only.

WIP: interface for map-reduce style kernels #284

Open: wants to merge 11 commits into master

Conversation

Hardcode84
Contributor

This PR adds new APIs to be used by implementers of pandas functions to help parallelize their kernels:

  • map_reduce(arg, init_val, map_func, reduce_func)
  • map_reduce_chunked(arg, init_val, map_func, reduce_func)

Parameters:

  • arg - a list-like object (a Python list, a numpy array, or any other object with a similar interface)
  • init_val - the initial value
  • map_func - the map function, applied to each element (or range of elements) in parallel (in different threads/processes or on different nodes)
  • reduce_func - the reduction function, used to combine the initial value and the results from different processes/nodes

The difference between the two functions (see the usage sketch after this list):

  • map_reduce applies the map function to each element in the range (the map function must take a single element and return a single element) and then applies the reduce function pairwise (the reduce function must take two elements and return a single element)
  • map_reduce_chunked applies the map function to the range of elements belonging to the current thread/node (the map function must take a range of elements as its parameter and return a list/array) and then applies the reduce function to entire ranges (the reduce function must take two ranges as parameters and return a list/array)
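A minimal usage sketch under the signatures described above. The import path and the helper functions below are placeholders not shown in the PR, and the merge helper simply re-sorts the concatenation rather than doing a proper merge:

import numpy as np

# Hypothetical import path; the PR does not show the final module name.
# from sdc.functions import map_reduce, map_reduce_chunked

def square(x):            # map_func for map_reduce: one element in, one element out
    return x * x

def add(a, b):            # reduce_func for map_reduce: two elements in, one element out
    return a + b

def sort_chunk(chunk):    # map_func for map_reduce_chunked: a range in, a range out
    return np.sort(chunk)

def merge(left, right):   # reduce_func for map_reduce_chunked: two ranges in, one range out
    return np.sort(np.concatenate((left, right)))  # stand-in for a hand-written merge

data = np.random.rand(1_000_000)

total = map_reduce(data, 0.0, square, add)                              # parallel sum of squares
sorted_data = map_reduce_chunked(data, np.empty(0), sort_chunk, merge)  # parallel sort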

You can also call either of these functions from inside a map or reduce function to support nested parallelism.
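For instance (a sketch reusing the hypothetical names from the example above), the chunk-level map function can itself call map_reduce, giving a parallel reduction inside each chunk:

def chunk_sum_of_squares(chunk):
    # Inner parallel reduction over this thread's/node's chunk.
    return np.array([map_reduce(chunk, 0.0, square, add)])

def concat(left, right):
    return np.concatenate((left, right))

partial_sums = map_reduce_chunked(data, np.empty(0), chunk_sum_of_squares, concat)
total = partial_sums.sum()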

These functions are usable for both thread and MPI parallelism.

If you call them from a numba @njit function, they will be parallelized by numba's built-in parallelization machinery.
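A sketch of such a call (assuming map_reduce resolves inside jitted code, which the PR implies but does not show; the jitted helpers are placeholders):

from numba import njit

@njit
def square_j(x):
    return x * x

@njit
def add_j(a, b):
    return a + b

@njit
def jit_sum_of_squares(arr):
    # Inside an njit function the map_reduce call is expected to be picked up
    # by numba's built-in parfor machinery; under @hpat.jit the distribution
    # pass would take over instead (not working yet, per the notes below).
    return map_reduce(arr, 0.0, square_j, add_j)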

If you call them from an @hpat.jit function, they will be distributed by the hpat parallelization pass (this doesn't work currently).

As an example, a parallel series sort (numpy.sort plus a hand-written merge) was implemented.

Current issues:

  • Thread-parallel sort isn't working due to a numba issue: Invalid result with parfor numba/numba#4806
  • MPI parallelization doesn't work at all (many issues; the biggest is that hpat supports only a very limited set of built-in functions (sum, mult, min, max) for parfor reductions)
  • The parallel sort handles NaNs differently from numpy.sort; this needs a fix
  • The thread/node count in map_reduce_chunked is hardcoded to 4; will fix
  • Proper documentation

The second part of this PR is a distribution depth knob to (not-so-)fine-tune nested parallelism between distribution and threading:

  • A new environment variable, SDC_DISTRIBUTION_DEPTH, controls how many levels of nested parallel loops will be distributed by DistributedPass
  • Distributed loops are any of the newly introduced map_reduce* functions or manually written prange loops
  • The default value is 1, which means only the outermost loop will be distributed by MPI, the next loop will be parallelized by numba, and all deeper loops will be executed sequentially (as numba doesn't support nested parallelization)
  • Set SDC_DISTRIBUTION_DEPTH to 0 to disable distribution.
# SDC_DISTRIBUTION_DEPTH=1
for i in prange(I):          # distributed by DistributedPass
    for j in prange(J):      # parallelized by numba
        for k in prange(K):  # executed sequentially
            ...

@pep8speaks

pep8speaks commented Nov 12, 2019

Hello @Hardcode84! Thanks for updating this PR. We checked the lines you've touched for PEP 8 issues, and found:

There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻

Comment last updated at 2019-11-12 11:16:09 UTC

a = len(l) // n    # minimum chunk size
b = a + 1          # size of the larger chunks (one extra element)
c = len(l) % n     # number of chunks that get the extra element
return [l[i * b: i * b + b] if i < c else l[c * b + (i - c) * a: c * b + (i - c) * a + a] for i in range(n)]
Contributor
It is quite understandable code, isn't it? :-)
Please don't name variables with single letters.
