A way to control the rate at which backfilling occurs would avoid potential overload on the Postgres instance due to the large amount of I/O that backfilling a large table can incur.
This, or some other way to solve the problem of expensive migration starts, is likely required to use pgroll on very high-traffic databases.
It is not only about overwhelming the main PostgreSQL server you are running the migration on, and not only about 'very high-traffic databases'. If you have a standby server on the other side of a questionable network, any mass change to a large enough table will outpace the rate at which the WAL files can be shipped. That risks the standby falling too far behind, the primary not keeping a WAL file long enough, and the standby becoming useless until it is set up again. People might need to restrict the backfill to some fairly slow speed that the network can keep up with (or slower still, to avoid overwhelming or delaying other uses of that network).
Having some sort of 'delay' parameter between batches, plus a way to tweak the batch size (which may already exist; I haven't read all of the docs or code yet), would let people tune the backfill for their environments. In a perfect world these settings could also be modified through environment configuration, not only within the migration code, so that people who run the same db in many different environments could use a single migration file for all of them, with reasonable delay/size values for each. And if the environment setting were re-checked between every batch, you could slow down an in-progress migration to let systems/networks recover, without needing to actually kill it mid-migration upon discovering an environment that isn't keeping up.
(Having a batch size parameter that could be set to 1 would also prevent possible deadlock scenarios when dealing with tables that the application updates frequently, assuming that a batch size of 1 means one record per transaction.)
This was raised as an issue on HN [1][2].