
Group by upload: use repartition to increase parallelism #601

Open · better365 wants to merge 11 commits into main

Conversation

better365 (Contributor)

Summary

The group-by upload input RDD has a small number of partitions because its rows are compact. This can lead to executor OOM when the rows are converted to Chronon rows.

Repartition to Spark's default parallelism to improve scalability.

Tested with the Relevance team's upload job: the running time dropped from 40+ minutes to under 15 minutes.

The downside is that repartition will trigger a shuffle.
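Roughly, the change looks like this. This is a simplified sketch, not the exact diff: `keyedInputRdd` and `parallelism` come from the existing upload code path shown in the review below, and the /10 threshold is the value discussed there.

// Sketch only: repartition the compact group-by upload input before conversion,
// since rows grow when turned into Chronon rows.
val parallelism = keyedInputRdd.sparkContext.defaultParallelism
val inputPartitions = keyedInputRdd.getNumPartitions
val keyedInputRddRepartitioned =
  if (inputPartitions < parallelism / 10) keyedInputRdd.repartition(parallelism) // adds a shuffle
  else keyedInputRdd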

Why / Goal

Improve performance.

Test Plan

  • Added Unit Tests
  • Covered by existing CI
  • Integration tested

Checklist

  • Documentation update

Reviewers

@nikhilsimha @hzding621

@@ -190,6 +190,8 @@ abstract class JoinBase(joinConf: api.Join,
// all lazy vals - so evaluated only when needed by each case.
lazy val partitionRangeGroupBy = genGroupBy(unfilledRange)

println(s"debug count ${partitionRangeGroupBy.inputDf.count()}")
Collaborator:

nit: remove this?

Contributor Author:

ah good catch

// shuffle point: the input rdd has fewer partitions due to its compact size
// when rows are converted to chronon rows, the size increases
// so we repartition to reduce memory overhead and improve performance
val keyedInputRddRepartitioned = if (inputPartition < (parallelism / 10)) {
Collaborator:

do we need to make this 10 configurable?

Contributor Author:

yeah we can make it configurable

Signed-off-by: Pengyu Hou <3771747+better365@users.noreply.github.com>
@nikhilsimha (Contributor) left a comment:

We should not make this the default behavior.

// so we repartition it to reduce memory overhead and improve performance
val keyedInputRddRepartitioned = if (inputPartition < (parallelism / 10)) {
keyedInputRdd
.repartition(parallelism)
Contributor:

I think this needs to be configurable (OPT_IN) before merging - we are going to add a shuffle step to ALL the upload jobs.

By default it should be opt-out

Contributor Author:

Sounds good. Let me make it configurable.
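One possible opt-in shape, assuming the flag is read from the Spark session config. The config key names and the factor key are hypothetical, purely to illustrate what "configurable" could mean; the actual names are up to the follow-up commit.

// Hypothetical opt-in sketch: flag and factor names are illustrative, not the final API.
val conf = sparkSession.conf
val repartitionEnabled =
  conf.getOption("spark.chronon.group_by.upload.repartition").exists(_.toBoolean) // off by default
val repartitionFactor =
  conf.getOption("spark.chronon.group_by.upload.repartition.factor").map(_.toInt).getOrElse(10)
val keyedInputRddRepartitioned =
  if (repartitionEnabled && inputPartition < parallelism / repartitionFactor)
    keyedInputRdd.repartition(parallelism)
  else
    keyedInputRdd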
