Provide a way to control the backfill rate #168

Open
andrew-farries opened this issue Oct 4, 2023 · 1 comment
Labels: enhancement (New feature or request)
Milestone: v1

Comments

andrew-farries (Collaborator) commented Oct 4, 2023

A way to control the rate at which backfilling occurs would avoid potentially overloading the Postgres instance with the large amount of I/O that backfilling a large table can incur.

This, or some other way to solve the problem of expensive migration starts, is likely required to use pgroll on very high-traffic databases.

This was raised as an issue on HN [1][2].

andrew-farries added the enhancement (New feature or request) label on Oct 4, 2023
ZombieFoodDan commented:

It is not only about overwhelming the primary PostgreSQL server you are running the migration on, and not only about 'very high-traffic databases'. If you have a standby server on the other side of an unreliable network, any mass change on a large enough table will outpace the rate at which WAL can be shipped. That risks the standby falling too far behind, the primary not retaining WAL segments long enough, and the standby being useless until it is rebuilt. People may need to restrict the backfill to a fairly slow rate that the network can keep up with (or slower still, to avoid overwhelming or delaying other uses of that network).

Having some sort of 'delay' parameter between batches, plus a way to tweak the batch size (which may already exist; I haven't read all of the docs or code yet), would let people tune the backfill for their environments. Ideally these settings could also be overridden by environment configuration rather than only in the migration definition, so that people who run the same database in many different environments can use a single migration file everywhere, with sensible delay and batch-size values for each environment. And if the environment settings were re-checked between every batch, you could slow down an in-progress migration to let systems and networks recover, without having to kill it mid-migration when you discover an environment isn't keeping up. (A sketch of what such a loop could look like is below.)

(A batch size parameter that can be set to 1 would also prevent possible deadlock scenarios with tables that the application updates frequently, assuming a batch size of 1 means one record per transaction.)
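To make the idea concrete, here is a minimal sketch of a throttled backfill loop, assuming batch size and inter-batch delay come from environment variables that are re-read before every batch. This is not pgroll's actual implementation: the environment variable names (`BACKFILL_BATCH_SIZE`, `BACKFILL_DELAY_MS`), the table/column handling, and the no-op `UPDATE` used to fire backfill triggers are all assumptions made for illustration.

```go
// Illustrative sketch only; not pgroll's real backfill code.
package backfill

import (
	"database/sql"
	"fmt"
	"os"
	"strconv"
	"time"
)

// envInt reads a positive integer from the environment, falling back to def.
func envInt(name string, def int) int {
	if v, err := strconv.Atoi(os.Getenv(name)); err == nil && v > 0 {
		return v
	}
	return def
}

// ThrottledBackfill touches rows in batches (so ON UPDATE triggers can
// populate new columns), sleeping between batches. Settings are re-read
// before every batch, so an operator can slow an in-progress backfill
// down without killing it.
func ThrottledBackfill(db *sql.DB, table, idCol string) error {
	lastID := int64(0)
	for {
		batchSize := envInt("BACKFILL_BATCH_SIZE", 1000) // 1 => one row per transaction
		delayMS := envInt("BACKFILL_DELAY_MS", 100)

		// Each batch runs as its own implicit transaction, so row locks
		// are held only briefly; a batch size of 1 locks a single row.
		query := fmt.Sprintf(
			`UPDATE %[1]s SET %[2]s = %[2]s
			 WHERE %[2]s IN (
			   SELECT %[2]s FROM %[1]s WHERE %[2]s > $1
			   ORDER BY %[2]s LIMIT $2
			 )
			 RETURNING %[2]s`, table, idCol)

		rows, err := db.Query(query, lastID, batchSize)
		if err != nil {
			return err
		}
		n := 0
		for rows.Next() {
			var id int64
			if err := rows.Scan(&id); err != nil {
				rows.Close()
				return err
			}
			if id > lastID {
				lastID = id
			}
			n++
		}
		rows.Close()
		if err := rows.Err(); err != nil {
			return err
		}
		if n == 0 {
			return nil // nothing left to backfill
		}
		time.Sleep(time.Duration(delayMS) * time.Millisecond)
	}
}
```

With `BACKFILL_BATCH_SIZE=1`, each batch touches a single row in its own transaction, which addresses the deadlock concern above; increasing `BACKFILL_DELAY_MS` caps the WAL volume generated per unit time so a lagging standby has a chance to keep up.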

andrew-farries added this to the v1 milestone on Apr 7, 2024