Update Live migration docs #2925
Conversation
Allow 10 minutes from last push for the staging site to build. If the link doesn't work, try using incognito mode instead. For internal reviewers, check web-documentation repo actions for staging build status. Link to build for this PR: http://docs-dev.timescale.com/docs-vineeth-nits-live-migration

Commits b8d3b46 to efdb20d (Compare)
1. With downtime: Stop database operations from your application, which will result in downtime. This allows the live migration to catch up on the lag between the source and target databases, enabling the validation checks to be performed. The downtime will last until the lag is eliminated and the data integrity checks are completed.
2. Without downtime: Since the difference between the source and target databases is less than 130 MB, you can perform data integrity checks, excluding the latest data that is still being written. This approach does not require taking your application down.

Now that the data integrity checks are complete, it's time to switch your target database to become the primary one. If you have selected option 2 for data integrity checks, stop writing to the source database and immediately start writing to the target database from the application. This will minimize application downtime to as low as application restart. It allows the live migration process to complete replicating data to the target database, as the source will no longer receive any new transactions. You will know the process is complete when the replication lag reduces to 0 megabytes. If you have chosen option 1 for data integrity checks, start your application to write data to the target database.
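The option-2 switchover described above can be sketched as a short shell sequence. This is a hedged sketch only: `stop_app`, `start_app`, `DATABASE_URL`, and `TARGET` are placeholders, not commands or variables from the Timescale docs.

```shell
# Hypothetical cutover helper for option 2. All names below are
# placeholders, not taken from the Timescale documentation.
cutover() {
  stop_app                        # stop all writes to the source database
  export DATABASE_URL="$TARGET"   # point the application at the target
  start_app                       # downtime is roughly one app restart
  # Live migration keeps draining the remaining lag in the background;
  # the process is complete when replication lag reaches 0 MB.
}
```

The point of the sketch is the ordering: writes to the source stop first, the connection string flips, and the application restarts against the target while the tail of the replication lag drains in parallel.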
If you have selected option 2 for data integrity checks, stop writing to the source database and immediately start writing to the target database from the application
This is a bit confusing. When you select option 2, you have to wait until the lag becomes 0 and then switch the application to the target.
Is that not mandatory? If the lag is, say, 5 minutes, the application can start writing to the target right away, and the 5-minute lag catches up in parallel as new writes land in the target. The only limitation is that during the switchover, the tail of the data cannot be accessed while it is in flight (catching up with the lag). Done this way, they wouldn't lose any write data or create gaps in their time-series data; the in-flight data is only inaccessible for a moment, until the lag catches up.
This would break transactional consistency and is not recommended. Additionally, you would miss updates/deletes targeting the not-yet-migrated data.
Got it, let's mention both as choices: if the user wants 100% transactional consistency, they have to trade off taking the database down for some time; if the user wants zero downtime during cutover, they have to trade off transactional consistency if any updates/deletes are happening to the latest data.
I would rather refrain from suggesting anything that causes transactional inconsistencies. IMHO, the whole point of migration with logical decoding is transactional consistency; we shouldn't break that.
I think we should not make this change. It complicates the matter. Let's have just one way to confirm data integrity and be 100% confident that all the data in the source/production database is migrated to Timescale.
The import line (Line 1) of this file can now be removed.
@@ -36,19 +36,19 @@ module.exports = [
excerpt: "Migrate a large database with low downtime",
children: [
{
title: "Live migration from PostgreSQL",
title: "From PostgreSQL",
Please also apply this change to the dual-write and backfill pages
@@ -38,7 +37,7 @@ For more information, refer to the step-by-step migration guide:
- [Live migration from PostgreSQL][from-postgres]
- [Live migration from TimescaleDB][from-timescaledb]

If you want to manually migrate data from PostgreSQL, refer to
If you want to have more control over migration and prefer to manually migrate data from PostgreSQL, refer to
If you want to have more control over migration and prefer to manually migrate data from PostgreSQL, refer to
If you want to have more control over the migration and prefer to manually migrate data from PostgreSQL, refer to
(100 GB-10 TB+) with low downtime (on the order of few minutes). It requires
more steps to execute than a migration with downtime using [pg_dump/restore][pg-dump-and-restore],
but supports more use-cases and has less requirements than the [dual-write and backfill] method.
Live migration is a strategy used to move a large amount of data (100 GB-10 TB+) with minimal downtime (typically a few minutes). It involves copying existing data from the source to the target and supports change data capture to stream ongoing changes from the source during the migration process.
Live migration is a strategy used to move a large amount of data (100 GB-10 TB+) with minimal downtime (typically a few minutes). It involves copying existing data from the source to the target and supports change data capture to stream ongoing changes from the source during the migration process.
Live migration is a strategy to move a large amount of data
(100 GB-10 TB+) with minimal downtime (typically a few minutes). It
achieves low downtime by simultaneously 1) copying existing data from the
source database to the target database and 2) recording ongoing changes which
are made on the source. When the initial data copy completes, it continuously
applies the recorded transactions to the target database until the target
database is fully caught up with the source database. At this point the
application's database connection is switched to the target database (which may
result in a short downtime), and the migration is complete.
I think "1)" and "2)" is not normal in docs. Also,

"When the initial data copy completes, it continuously applies the recorded transactions"

could read: "When the initial data copy completes, live migration applies the recorded transactions ..."
but supports more use-cases and has less requirements than the [dual-write and backfill] method.
Live migration is a strategy used to move a large amount of data (100 GB-10 TB+) with minimal downtime (typically a few minutes). It involves copying existing data from the source to the target and supports change data capture to stream ongoing changes from the source during the migration process.

In contrast, [pg_dump/restore][pg-dump-and-restore] only supports copying the database from the source to the target without capturing ongoing changes, which results in downtime. On the other hand, the [dual-write and backfill] method requires setting up dual write in the application logic. This method is recommended only for append-only workloads as it does not support updates and deletes during migration.
In contrast, [pg_dump/restore][pg-dump-and-restore] only supports copying the database from the source to the target without capturing ongoing changes, which results in downtime. On the other hand, the [dual-write and backfill] method requires setting up dual write in the application logic. This method is recommended only for append-only workloads as it does not support updates and deletes during migration.
In contrast, [pg_dump/restore][pg-dump-and-restore] only supports copying data
from the source to the target without recording ongoing changes, so
applications which are writing must be stopped for the duration of the
migration. On the other hand, the [dual-write and backfill] method also
provides a way to migrate with low downtime, but requires modifying your
application to write to two databases simultaneously, and only works with
append-only workloads as it does not support updates and deletes during
migration.
You should remove `import SourceTargetNote from "versionContent/_partials/_migrate_source_target_note.mdx";` from the import list.
source database. This is the downtime phase and will last until you have
completed the validation step (4). Be sure to go through the validation step
before you enter the downtime phase to keep the overall downtime minimal.
Once the lag between the databases is below 130 megabytes, we recommend performing data integrity checks. There are two ways to do this:
Once the lag between the databases is below 130 megabytes, we recommend performing data integrity checks. There are two ways to do this:
Once the lag between the databases is below 30 megabytes, we recommend performing data integrity checks. There are two ways to do this:

The code waits for the lag to be 30 MB.
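For readers following along, the threshold check under discussion can be sketched in shell. The 30 MB value matches the comment above; the psql query in the trailing comment is an assumption about how lag could be measured on the source, and the slot name is a placeholder, not taken from the migration tool.

```shell
# Hedged sketch: is the replication lag under the 30 MB threshold the
# migration tool waits for? Pass the lag in bytes.
lag_below_threshold() {
  local lag_bytes=$1
  local threshold=$((30 * 1024 * 1024))  # 30 MB, per the review comment
  [ "$lag_bytes" -lt "$threshold" ]
}

# One possible way to obtain lag_bytes on the source database
# (assumption: the replication slot is named "pgcopydb"):
#   psql "$SOURCE" -t -A -c "SELECT pg_wal_lsn_diff(pg_current_wal_lsn(),
#     confirmed_flush_lsn) FROM pg_replication_slots
#     WHERE slot_name = 'pgcopydb'"
```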
Stopping writes to the source database allows the live migration process to
finish replicating data to the target database. This will be evident when the
replication lag reduces to 0 megabytes.
1. With downtime: Stop database operations from your application, which will result in downtime. This allows the live migration to catch up on the lag between the source and target databases, enabling the validation checks to be performed. The downtime will last until the lag is eliminated and the data integrity checks are completed.
2. Without downtime: Since the difference between the source and target databases is less than 130 MB, you can perform data integrity checks, excluding the latest data that is still being written. This approach does not require taking your application down.
I don't think we should make this change. We should let users have 100% confidence in data integrity and not promote "partial integrity" by comparing non-recent data; otherwise they may have doubts about integrity later on. Note that users will always have to take momentary downtime to switch applications from writing to the source to writing to the target.
Once the lag between the databases is below 30 megabytes, and you're ready to
take your applications offline, stop all applications which are writing to the
source database. This is the downtime phase and will last until you have
completed the validation step (4). Be sure to go through the validation step
(4) before you enter the downtime phase to keep the overall downtime minimal.
Once the lag between the databases is below 130 megabytes, we recommend performing data integrity checks. There are two ways to do this:

Stopping writes to the source database allows the live migration process to
finish replicating data to the target database. This will be evident when the
replication lag reduces to 0 megabytes.
1. With downtime: Stop database operations from your application, which will result in downtime. This allows the live migration to catch up on the lag between the source and target databases, enabling the validation checks to be performed. The downtime will last until the lag is eliminated and the data integrity checks are completed.
2. Without downtime: Since the difference between the source and target databases is less than 130 MB, you can perform data integrity checks, excluding the latest data that is still being written. This approach does not require taking your application down.

Now that the data integrity checks are complete, it's time to switch your target database to become the primary one. If you have selected option 2 for data integrity checks, stop writing to the source database and immediately start writing to the target database from the application. This will minimize application downtime to as low as application restart. It allows the live migration process to complete replicating data to the target database, as the source will no longer receive any new transactions. You will know the process is complete when the replication lag reduces to 0 megabytes. If you have chosen option 1 for data integrity checks, start your application to write data to the target database.
I don't think we should make this change. It complicates the validation work and promotes partial integrity checks. We should discuss this on a team call if you think it's important to mention the without-downtime option.
-v ON_ERROR_STOP=1 \
--echo-errors \
-f roles.sql \
-f pre-data-dump.sql
```
## 4. Perform Live Migration

## 5. Perform "live migration"
The remaining steps for migrating data from a RDS Postgres instance to Timescale
with low-downtime are the same as the ones mentioned in "Live migration"
documentation from [Step 5] onwards. You should follow the mentioned steps
documentation from [Step 3] onwards. You should follow the mentioned steps
to successfully complete the migration process.

[live migration]: /migrate/:currentVersion:/live-migration/live-migration-from-postgres/
[Step 5]: /migrate/:currentVersion:/live-migration/live-migration-from-postgres/#5-enable-hypertables
[Step 3]: /migrate/:currentVersion:/live-migration/live-migration-from-postgres/#3-run-the-live-migration-docker-image
This is not right. We are asking users to continue from Step 3 in the live-migration-from-postgres page. Step 3 there is:

```
docker run --rm -dit --name live-migration \
  -e PGCOPYDB_SOURCE_PGURI=$SOURCE \
  -e PGCOPYDB_TARGET_PGURI=$TARGET \
  -v ~/live-migration:/opt/timescale/ts_cdc \
  timescale/live-migration:v0.0.1
```

Since the above command already runs `pgcopydb follow` internally, we should not ask users to perform step 3 on this page, which also includes `pgcopydb follow`. The correct approach is to ask users to continue from Step 3 in the live-migration-from-postgres page, replacing step 3 on this page.
@VineethReddy02, would you like me to work on the comments made for this issue, or shall we close it?

It seems like the PR has become outdated, and it might be best to close it and create a new one. There have been notable changes in the live migration documentation since this PR was initially submitted.
Description
Update live migration docs with fixes and better explanation on database switchover.
Links
Fixes #[insert issue link, if any]
Writing help
For information about style and word usage, see the style guide
Review checklists
Reviewers: use this section to ensure you have checked everything before approving this PR:
Subject matter expert (SME) review checklist
Documentation team review checklist