This test/debug tool was recently added in #7444.
This ticket tracks a limitation in the tool when used with sharded tenants that are actively being written to.
The tool works well enough for sharded tenants in that it fetches all the data for all the shards, and one can start up a pageserver against it. However, because shards advance their `disk_consistent_lsn` independently, trying to run an endpoint against the downloaded data has a couple of problems:
- Shard zero will serve a basebackup by default at whatever its latest LSN is, but other shards may not have seen that LSN.
- If we hack the metadata so that every shard reports the same `disk_consistent_lsn`, one would still end up with an unwritable tenant: a compute trying to write from the basebackup LSN would end up writing WAL to the safekeepers that some shards wouldn't ingest, because they'd already seen a higher LSN (see the sketch after this list).
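For illustration, here is a minimal sketch of the invariant at play, using a hypothetical `Lsn` newtype rather than the pageserver's actual types: a basebackup LSN is only servable if every shard has flushed at least up to it, and the only LSN guaranteed to satisfy that is the minimum across shards.

```rust
/// Hypothetical stand-in for the pageserver's LSN type.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
struct Lsn(u64);

/// A basebackup LSN is only safe if every shard has ingested and
/// flushed WAL at least up to that point.
fn basebackup_lsn_is_safe(candidate: Lsn, shard_disk_consistent_lsns: &[Lsn]) -> bool {
    shard_disk_consistent_lsns.iter().all(|&lsn| lsn >= candidate)
}

fn main() {
    // Shard zero (0x50) is ahead of shard one (0x30): shard zero's
    // latest LSN is not a safe basebackup point.
    let shards = [Lsn(0x50), Lsn(0x30)];
    assert!(!basebackup_lsn_is_safe(Lsn(0x50), &shards));

    // The minimum disk_consistent_lsn across shards is always safe.
    let min = *shards.iter().min().unwrap();
    assert!(basebackup_lsn_is_safe(min, &shards));
}
```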
To solve this, we probably need to make the `tenant-import` command smart enough to trim the imported data back to a specific LSN (the lowest `disk_consistent_lsn` across the shards), including trimming layer files. This could be done either in the scrubber or as a pageserver API (perhaps as part of the `tenant-import` flow).
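A minimal sketch of what that trimming pass might look like, again with hypothetical types (`Lsn`, `LayerDesc`) rather than the real pageserver structures: compute the lowest `disk_consistent_lsn` across shards, then partition each shard's layers into those entirely at or below the trim point and those that extend above it and would need to be rewritten or dropped.

```rust
/// Hypothetical LSN newtype, standing in for the pageserver's `Lsn`.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
struct Lsn(u64);

/// Hypothetical descriptor for an imported layer file's LSN range.
struct LayerDesc {
    name: String,
    lsn_end: Lsn,
}

/// The trim target: the lowest disk_consistent_lsn across all shards.
fn trim_lsn(shard_disk_consistent_lsns: &[Lsn]) -> Option<Lsn> {
    shard_disk_consistent_lsns.iter().copied().min()
}

/// Split a shard's layers into those we can keep unmodified (their
/// LSN range ends at or below the trim point) and those that contain
/// WAL above it and must be truncated or deleted.
fn partition_layers(layers: Vec<LayerDesc>, trim: Lsn) -> (Vec<LayerDesc>, Vec<LayerDesc>) {
    layers.into_iter().partition(|layer| layer.lsn_end <= trim)
}

fn main() {
    let shard_lsns = [Lsn(0x50), Lsn(0x30), Lsn(0x48)];
    let trim = trim_lsn(&shard_lsns).expect("at least one shard");
    assert_eq!(trim, Lsn(0x30));

    let layers = vec![
        LayerDesc { name: "delta_00-30".into(), lsn_end: Lsn(0x30) },
        LayerDesc { name: "delta_30-50".into(), lsn_end: Lsn(0x50) },
    ];
    let (keep, needs_trim) = partition_layers(layers, trim);
    assert_eq!(keep.len(), 1);
    assert_eq!(needs_trim[0].name, "delta_30-50");
}
```

After such a pass, every shard's metadata would record the same `disk_consistent_lsn`, so a compute starting from the basebackup would produce WAL that all shards still expect to ingest.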