
feat(cass): Consolidate to one partkey table instead of table per shard #1536

Merged
3 commits merged into filodb:develop from the one-partkey-table branch on May 12, 2023

Conversation

@vishramachandran (Member) commented Mar 20, 2023

Pull Request checklist

  • The commit(s) message(s) follows the contribution guidelines ?
  • Tests for the changes have been added (for bug fixes / features) ?
  • Docs have been added / updated (for bug fixes / features) ?

Today, we have one table per shard for storing part keys.
This (work-in-progress) PR explores storing all part keys in one table.
This is to reduce Cassandra compaction cost incurred due to numerous tables.

All features are hidden behind a feature flag.

These are the read/write path considerations in the design of the Cassandra schema (a hypothetical schema sketch follows the list):

  • Should be able to load the contents of a shard efficiently for index bootstrap
  • Should be able to read/write a single part key efficiently for regular ingestion operations
  • Should be able to do cardinality busting
  • Should be able to do index migration for downsampling
  • Should be able to repair across DCs
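To make the layout concrete, here is a minimal, hypothetical sketch of how one consolidated table could satisfy these considerations. The real DDL lives in the PartitionKeysV2Table class added by this PR; the keyspace value, column names, and primary-key layout below are illustrative assumptions, not the merged schema.

val keyspace = "filodb"   // assumed keyspace name, for illustration only
val createPartKeysV2TableSketchCql =
  s"""CREATE TABLE IF NOT EXISTS $keyspace.partitionkeys_v2 (
     |    dataset text,
     |    bucket int,        -- hash(partKey) % pk-v2-table-num-buckets; bounds wide-row size
     |    shard int,         -- lets index bootstrap read one shard's keys efficiently
     |    partKey blob,
     |    startTime bigint,
     |    endTime bigint,
     |    PRIMARY KEY ((dataset, bucket), shard, partKey)
     |)""".stripMargin

With a layout like this, loading a shard becomes a scan of each (dataset, bucket) partition filtered by shard, while single part-key reads and writes remain point lookups.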

TODOs for later:

  • Performance Testing
  • Data migration plan

@vishramachandran self-assigned this Mar 20, 2023
@vishramachandran marked this pull request as draft March 20, 2023 17:42
@vishramachandran marked this pull request as ready for review March 28, 2023 18:50
@vishramachandran changed the title from "[WIP] feat(cass): Consolidate to one partkey table instead of table per shard" to "feat(cass): Consolidate to one partkey table instead of table per shard" Mar 28, 2023
@alextheimer (Contributor) previously approved these changes May 8, 2023 and left a comment:

There's some copy-paste going on here, but I don't think there's any better way to approach this frequent if-else-with-similar-blocks logic.

  • We could write generalized, complex-parameter functions to prevent copy-paste, but that would complicate readability.
  • We could write classes to handle the if/else statements via dynamic dispatch, but that's too much code/complexity for a flag we intend to keep temporary.

This looks good 👍

@amolnayak311 (Contributor) left a comment:

Thanks a lot for the changes, and apologies for taking long to review. It seems you need to rebase, and some changes around read consistency will be needed. I am happy to approve once the changes are pushed, as approval will be needed anyway once a new commit is made.

part-keys-v2-table-enabled = false

# Number of buckets used in partKeyv2 cass tables (to control wide rows problem)
pk-v2-table-num-buckets = 5000
Contributor

Is this a good default, or does it need tuning?

Member Author

I will add the math
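For readers curious what that math might look like, here is a back-of-the-envelope sketch; the cardinality and part-key size figures are assumptions for illustration, not numbers from this PR.

// Rough sizing sketch with assumed inputs (not measurements from this PR).
val numPartKeys   = 50000000L   // assumed: ~50M part keys in a dataset
val avgPartKeyKb  = 1.0         // assumed: ~1 KB serialized per part key
val numBuckets    = 5000        // pk-v2-table-num-buckets default
val rowsPerBucket = numPartKeys / numBuckets                 // 10,000 rows per wide row
val mbPerBucket   = rowsPerBucket * avgPartKeyKb / 1024.0    // ~9.8 MB per wide row
// Roughly 10 MB per Cassandra partition stays well under the commonly cited
// ~100 MB wide-partition guideline, so 5000 buckets looks sane at this assumed scale.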

val chunkTable = getOrCreateChunkTable(dataset)
val partitionKeysByUpdateTimeTable = getOrCreatePartitionKeysByUpdateTimeTable(dataset)
if (createTablesEnabled) {
val partitionKeysV2Table = getOrCreatePartitionKeysV2Table(dataset)
Contributor

We don't check the feature flag for whether or not to create the table?

Member Author

Note that getOrCreatePartitionKeysV2Table does not actually create the table in Cassandra; it only creates the PartitionKeysV2Table object. The initialize call creates the tables, and it is lazy.

I did not add the guards for the v1 tables because we have not narrowed down on a migration strategy. We may choose to migrate with co-existing tables; that option is still on the table.

Nevertheless, it is likely we will not choose the co-existing-tables strategy for migration, so I will disable the v1 table object initialization anyway.
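For readers following the thread, the pattern being described is roughly the following. This is a simplified sketch of the described behavior, not the exact code in this PR; clusterConnector and createTablesIfNotExist are placeholder names.

import java.util.concurrent.ConcurrentHashMap

// getOrCreate* only builds and caches the in-memory table handle; no CQL runs here.
private val partKeysV2Tables = new ConcurrentHashMap[DatasetRef, PartitionKeysV2Table]()

def getOrCreatePartitionKeysV2Table(dataset: DatasetRef): PartitionKeysV2Table =
  partKeysV2Tables.computeIfAbsent(dataset,
    dsRef => new PartitionKeysV2Table(dsRef, clusterConnector, writeConsistencyLevel))

// The actual CREATE TABLE statements run lazily, and only when initialize() is
// invoked with table creation enabled.
def initialize(dataset: DatasetRef, numShards: Int): Future[Response] =
  if (createTablesEnabled) createTablesIfNotExist(dataset, numShards)   // assumed helper
  else Future.successful(Success)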

@@ -78,17 +82,26 @@ extends ColumnStore with CassandraChunkSource with StrictLogging {
MeasurementUnit.time.milliseconds).withoutTags()

def initialize(dataset: DatasetRef, numShards: Int): Future[Response] = {
if (createTablesEnabled) {
Contributor

If createTablesEnabled is disabled, then even with partKeysV2TableEnabled set, the else block will end up creating the per-shard part-key tables.

Member Author

Again, table creation happens only on the initialize() call, not upon creating the handle object.

)
val updateHour = System.currentTimeMillis() / 1000 / 60 / 60
Await.result(
target.writePartKeys(datasetRef, shard = 0 /* not used */,
Contributor

Is it important that we call out the assumption that the source and target are both using v1 or both using v2?

val split = (pk.hash.get & Int.MaxValue) % pkByUTNumSplits
val writePkFut = pkTable.writePartKey(pk, ttl).flatMap {
val split = PartKeyRecord.getBucket(pk.partKey, schemas, pkByUTNumSplits)
val bucket = PartKeyRecord.getBucket(pk.partKey, schemas, pkv2NumBuckets)
Contributor

This bucket computation can go inside the if condition below. However, val pkTable = getOrCreatePartitionKeysTable(ref, shard) at the beginning of the method should go inside the else; otherwise we will create the v1 table even with v2 enabled.

Member Author

See other comments, but done anyway
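To illustrate the restructure being discussed, a sketch of the intended shape follows; the v2 write call's signature and the surrounding variables are assumptions based on the snippets quoted in this review.

// Compute the v2 bucket and look up the v1 table only in the branch that uses them,
// so enabling v2 never instantiates a v1 per-shard table handle.
val writePkFut =
  if (partKeysV2TableEnabled) {
    val bucket = PartKeyRecord.getBucket(pk.partKey, schemas, pkv2NumBuckets)
    pkV2Table.writePartKey(bucket, shard, pk, ttl)   // assumed v2 write signature
  } else {
    val pkTable = getOrCreatePartitionKeysTable(ref, shard)
    pkTable.writePartKey(pk, ttl)
  }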

val startTime = // compare and get the oldest start time
if (sourceRec.startTime < targetRec.startTime) sourceRec.startTime else targetRec.startTime
val endTime = // compare and get the latest end time
if (sourceRec.endTime > targetRec.endTime) sourceRec.endTime else targetRec.endTime
- PartKeyRecord(sourceRec.partKey, startTime, endTime, None)
+ PartKeyRecord(sourceRec.partKey, startTime, endTime, sourceRec.shard)
Contributor

Is there a possibility for the source and target shards to be different in case the spread changes?

Member Author

Spread changes should not be combined with migration or chunk repair. In any case, this is out of scope for this PR.

}
}

def getOrCreatePartitionKeysV2Table(dataset: DatasetRef): PartitionKeysV2Table = {
Contributor

Do you think it's a good idea to put require(!partKeysV2TableEnabled) in getOrCreatePartitionKeys?

Member Author

Table creation will not happen there, but I added the check anyway to catch any issues.
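A sketch of the guard being described, for concreteness; the method body shown is a placeholder, and only the require call reflects the change discussed.

def getOrCreatePartitionKeysTable(dataset: DatasetRef, shard: Int): PartitionKeysTable = {
  // Fail fast if the v1 per-shard accessor is reached while the v2 path is enabled.
  require(!partKeysV2TableEnabled,
    "per-shard part-key tables should not be used when part-keys-v2-table-enabled = true")
  partitionKeysTableCache(dataset)(shard)   // placeholder for the existing cached lookup
}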


sealed class PartitionKeysV2Table(val dataset: DatasetRef,
                                  val connector: FiloCassandraConnector,
                                  writeConsistencyLevel: ConsistencyLevel)
Contributor

We will have to introduce read consistency, similar to what was introduced in PR #1573.

Member Author

Yes, this is done now after the rebase.
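For context, threading a read-side consistency level through the class (alongside the write-side one shown above) might look roughly like this; the parameter name and wiring are assumptions rather than the merged code.

sealed class PartitionKeysV2Table(val dataset: DatasetRef,
                                  val connector: FiloCassandraConnector,
                                  writeConsistencyLevel: ConsistencyLevel,
                                  readConsistencyLevel: ConsistencyLevel) {
  // Prepared read statements would bind the read-side level, e.g.:
  //   readPartKeyCql.setConsistencyLevel(readConsistencyLevel)
}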

object PartKeyRecord {
  def getBucket(partKey: Array[Byte], schemas: Schemas, numBuckets: Int): Int = {
    val hash = schemas.part.binSchema.partitionHash(partKey, UnsafeUtils.arayOffset)
    (hash & Int.MaxValue) % numBuckets
  }
}
Contributor

Nitpick: perhaps assert that numBuckets is non-zero?

Member Author

This is unlikely to happen since we have default configs. Even if it happens, an IllegalArgumentException is not any better than a divide-by-zero exception. I'll leave it as is.

@@ -48,7 +48,7 @@ class RocksDbCardinalityStoreMemoryCapSpec extends AnyFunSpec with Matchers {
val timePerIncrementMicroSecs = (end-start) / numTimeSeries / 1000
println(s"Was able to increment $numTimeSeries time series, $timePerIncrementMicroSecs" +
s"us each increment total of ${totalTimeSecs}s")
- timePerIncrementMicroSecs should be < 200L
+ timePerIncrementMicroSecs should be < 300L
Contributor

Not a big fan of putting assertions on runtimes in a Spec; these may lead to flaky tests on different platforms.

Member Author

I don't disagree. Looks like this fix is already in, so I removed my change.


@whizkido (Contributor)

Finally !!!!!

@amolnayak311 (Contributor) left a comment:

Thanks @vishramachandran, I missed the getOrCreate-object vs. create-table distinction. I think it's good to go. Thanks for the quick turnaround.

@vishramachandran merged commit 7eab1a8 into filodb:develop May 12, 2023
1 check passed
@vishramachandran deleted the one-partkey-table branch May 12, 2023 16:55