Group by upload: use repartition to increase parallelism #601

Open · wants to merge 11 commits into main
@@ -87,6 +87,7 @@ class ChrononKryoRegistrator extends KryoRegistrator {
"org.apache.spark.sql.types.BooleanType$",
"org.apache.spark.sql.types.BinaryType$",
"org.apache.spark.sql.types.DateType$",
"org.apache.spark.sql.types.ArrayType",
"org.apache.spark.sql.types.TimestampType$",
"org.apache.spark.util.sketch.BitArray",
"org.apache.spark.util.sketch.BloomFilterImpl",
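For reference, ChrononKryoRegistrator extends Spark's KryoRegistrator and, judging from the diff, keeps a list of class names to register, which is why org.apache.spark.sql.types.ArrayType has to be listed explicitly. Below is a minimal sketch of that mechanism; ExampleKryoRegistrator and its class list are illustrative, not Chronon's actual implementation.

import com.esotericsoftware.kryo.Kryo
import org.apache.spark.serializer.KryoRegistrator

// Illustrative registrator, not Chronon's actual class: each listed name is
// resolved and registered so Kryo can serialize instances of it, e.g. when
// spark.kryo.registrationRequired=true.
class ExampleKryoRegistrator extends KryoRegistrator {
  private val classNames = Seq(
    "org.apache.spark.sql.types.ArrayType", // the class added in this PR
    "org.apache.spark.sql.types.TimestampType$"
  )

  override def registerClasses(kryo: Kryo): Unit = {
    classNames.foreach(name => kryo.register(Class.forName(name)))
  }
}

Such a registrator is wired in via spark.serializer=org.apache.spark.serializer.KryoSerializer and spark.kryo.registrator pointing at the registrator class.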
33 changes: 26 additions & 7 deletions spark/src/main/scala/ai/chronon/spark/GroupByUpload.scala
@@ -1,6 +1,12 @@
package ai.chronon.spark

import ai.chronon.aggregator.windowing.{FinalBatchIr, FiveMinuteResolution, Resolution, SawtoothOnlineAggregator}
import ai.chronon.aggregator.windowing.{
BatchIr,
FinalBatchIr,
FiveMinuteResolution,
Resolution,
SawtoothOnlineAggregator
}
import ai.chronon.api
import ai.chronon.api.{Accuracy, Constants, DataModel, GroupByServingInfo, QueryUtils, ThriftJsonCodec}
import ai.chronon.api.Extensions.{GroupByOps, MetadataOps, SourceOps}
@@ -60,11 +66,25 @@ class GroupByUpload(endPartition: String, groupBy: GroupBy) extends Serializable
.serialize(sawtoothOnlineAggregator.init)
.capacity()}
|""".stripMargin)
val outputRdd = groupBy.inputDf.rdd
.keyBy(keyBuilder)
.mapValues(SparkConversions.toChrononRow(_, groupBy.tsIndex))
.aggregateByKey(sawtoothOnlineAggregator.init)( // shuffle point
seqOp = sawtoothOnlineAggregator.update, combOp = sawtoothOnlineAggregator.merge)

def seqOp(batchIr: BatchIr, row: Row): BatchIr = {
sawtoothOnlineAggregator.update(batchIr, SparkConversions.toChrononRow(row, groupBy.tsIndex))
}

val parallelism = sparkSession.sparkContext.getConf.getInt("spark.default.parallelism", 1000)
val inputPartition = groupBy.inputDf.rdd.getNumPartitions
val keyedInputRdd = groupBy.inputDf.rdd.keyBy(keyBuilder)
// shuffle point: the input RDD has relatively few partitions because its rows are compact;
// converting them to Chronon rows increases their size, so we repartition
// to reduce per-task memory overhead and improve performance
val keyedInputRddRepartitioned = if (inputPartition < (parallelism / 10)) {
Collaborator: do we need to make this 10 configurable?

Contributor (Author): yeah, we can make it configurable

keyedInputRdd
.repartition(parallelism)
Contributor: I think this needs to be configurable (opt-in) before merging - we are going to add a shuffle step to ALL the upload jobs. By default it should be off.

Contributor (Author): Sounds good. Let me make it configurable.

} else {
keyedInputRdd
}
val outputRdd = keyedInputRddRepartitioned
.aggregateByKey(sawtoothOnlineAggregator.init)(seqOp = seqOp, combOp = sawtoothOnlineAggregator.merge)
.mapValues(sawtoothOnlineAggregator.normalizeBatchIr)
.map {
case (keyWithHash: KeyWithHash, finalBatchIr: FinalBatchIr) =>
@@ -75,7 +95,6 @@ class GroupByUpload(endPartition: String, groupBy: GroupBy) extends Serializable
}
KvRdd(outputRdd, groupBy.keySchema, irSchema)
}

}

object GroupByUpload {
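Following up on the review thread above, here is a minimal sketch of how the repartition could be made opt-in and the hard-coded divisor of 10 made configurable. The config key names are hypothetical and not part of this PR; the snippet assumes it slots into GroupByUpload where sparkSession, groupBy and keyBuilder are in scope.

// Sketch only: the config keys below are hypothetical, not part of this PR.
val conf = sparkSession.sparkContext.getConf
// Off by default, so existing upload jobs do not silently gain an extra shuffle.
val repartitionEnabled = conf.getBoolean("spark.chronon.groupby.upload.repartition", false)
// Replaces the hard-coded divisor of 10 discussed in the review.
val smallPartitionFactor = conf.getInt("spark.chronon.groupby.upload.repartition.factor", 10)
val parallelism = conf.getInt("spark.default.parallelism", 1000)

val keyedInputRdd = groupBy.inputDf.rdd.keyBy(keyBuilder)
val keyedInputRddRepartitioned =
  if (repartitionEnabled && keyedInputRdd.getNumPartitions < parallelism / smallPartitionFactor) {
    keyedInputRdd.repartition(parallelism) // extra shuffle, taken only when explicitly enabled
  } else {
    keyedInputRdd
  }

The explicit repartition adds one more shuffle ahead of aggregateByKey, which is why the reviewers ask for it to be off unless a job opts in.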