-
As a user, I am okay with a process where I might upgrade the tools on a test machine, and the new tools may show warnings for the old database. If the new tool could make suggestions, or even just show the issue number related to the change, that would help me fix things.
-
Chasing backports to the CLI is going to be horrible, especially across breaking changes; we're already doing this enough for it to be a burden, TBH. In that context, as we discussed, we also need to start rolling up the old migrations into the "base" schema to enforce the N-1 policy and make it easier to track things down over time.
-
Are “more releases” an option that would help? Instead of backporting changes, plan two releases (R1, R2), where the first warns on identified problems and (maybe optionally) enforces not causing new ones once the system is “ready” for the second release. The second release then enforces everything required. The release notes for R2 would mention needing to move through R1 for an in-place, hands-off upgrade.
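As a rough sketch of that pattern (all names here are hypothetical, not Kanidm's actual API), the same validation could run in both releases, with R1 downgrading failures to warnings and R2 enforcing them:

```rust
/// How strictly a release treats known migration problems.
#[derive(Clone, Copy)]
enum MigrationPhase {
    /// R1: report problems, but allow the server to start.
    Warn,
    /// R2: refuse to proceed until problems are fixed.
    Enforce,
}

/// One finding from a pre-upgrade check, e.g. "posix id in a reserved range".
struct CheckResult {
    entry: String,
    problem: String,
    /// Issue number the operator can read for remediation steps.
    issue: u32,
}

fn run_upgrade_checks(phase: MigrationPhase, results: Vec<CheckResult>) -> Result<(), String> {
    for r in &results {
        eprintln!(
            "upgrade check: {} - {} (see issue #{})",
            r.entry, r.problem, r.issue
        );
    }
    match phase {
        // R1: problems are visible but non-fatal, so the upgrade stays hands-off.
        MigrationPhase::Warn => Ok(()),
        // R2: the same problems now block startup until resolved.
        MigrationPhase::Enforce if results.is_empty() => Ok(()),
        MigrationPhase::Enforce => Err(format!(
            "{} unresolved migration problems, refusing to start",
            results.len()
        )),
    }
}

fn main() {
    let findings = vec![CheckResult {
        entry: "posix account 'alice'".into(),
        problem: "gidnumber 4000000000 is outside the allowed range".into(),
        issue: 2601,
    }];
    // R1 behaviour: the problem is reported but the server still starts.
    assert!(run_upgrade_checks(MigrationPhase::Warn, findings).is_ok());
}
```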
-
You're already doing great. I'll just throw some crazy ideas in here and you'll know best.
Either way, you would inform the operator about roadblocks during the migration.
-
From rc.15 to rc.16 we encountered a number of user issues due to incorrect assumptions on our part as developers.
But we learn from mistakes and improve. As a result, we want more robust processes to limit migration issues like this.
Migrations generally fall into these categories:

- Addition of new schema elements or data
- Loosening of existing restrictions
- In-place modification of existing data
- Tightening of restrictions on existing data
Generally, when we have internal migrations we aim for them to be seamless. In one version upgrade we completely rewrote your entire database under the hood without anyone noticing. This is what we want! We aim for the first three categories - addition, loosening, and in-place modification (such as the database example).
But sometimes we unfortunately have to tighten restrictions. Past examples (non-exhaustive) include restricting the allowed characters in names, removing webauthn credentials that don't allow user verification, and converting oauth2 configurations into "accounts". The final one was a tightening because previously oauth2 configuration names were only expected to be unique within the set of all other oauth2 configurations, but we had to convert them so that oauth2 configuration names must also be unique with respect to group and account names (for more, see #2217).
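A minimal sketch of why this kind of tightening can fail on existing data (this is not Kanidm's real data model, just an illustration): once oauth2 configuration names share a namespace with group and account names, a collision that was previously legal becomes a migration blocker.

```rust
use std::collections::HashMap;

/// Hypothetical flattened view of existing entries as (kind, name) pairs;
/// Kanidm's real data model is richer than this.
fn find_name_collisions(entries: &[(&str, &str)]) -> Vec<String> {
    // Map each name to every kind of entry that claims it.
    let mut seen: HashMap<&str, Vec<&str>> = HashMap::new();
    for &(kind, name) in entries {
        seen.entry(name).or_default().push(kind);
    }
    seen.into_iter()
        .filter(|(_, kinds)| kinds.len() > 1)
        .map(|(name, kinds)| format!("'{}' is used by: {}", name, kinds.join(", ")))
        .collect()
}

fn main() {
    // Under the old rules these could coexist; in a shared namespace the
    // oauth2 configuration "staff" now collides with the group "staff".
    let entries = [
        ("oauth2 configuration", "staff"),
        ("group", "staff"),
        ("account", "alice"),
    ];
    for c in find_name_collisions(&entries) {
        println!("migration blocker: {}", c);
    }
}
```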
In the last upgrade this change caused quite a few problems, and we didn't have the right tools in place to help people fix the errors they encountered. We made a bad assumption (that name collisions were unlikely) when in fact they were common.
From rc.16 to 1.2.0 we have another major change where data is restricted. #2601 correctly noted that we were able to allocate posix uid/gid numbers into ranges that systemd has reserved for itself, and also that half of the uid/gid range can confuse the Linux kernel. As a result, we need to constrain these values.
There is a very high probability that this will affect all users with posix accounts in their instances, because more than 50% of the uid range we were allocating from can no longer be used.
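To make the constraint concrete, here is an illustrative check (the exact bounds are drawn from systemd's documented reservations and the signed 32-bit boundary, and may differ from what 1.2.0 actually enforces):

```rust
/// Illustrative classification of problematic posix id values. The precise
/// ranges Kanidm enforces may differ from the ones shown here.
fn posix_id_problem(id: u32) -> Option<&'static str> {
    match id {
        // Reserved for root and system users/daemons.
        0..=999 => Some("reserved for system users"),
        // systemd's documented DynamicUser allocation range.
        61184..=65519 => Some("reserved by systemd for DynamicUser"),
        // nobody, and the 16-bit (uid_t)-1 sentinel.
        65534 | 65535 => Some("reserved (nobody / 16-bit -1)"),
        // Above 2^31 an id looks negative to anything treating uid_t as
        // signed, which is the "half the range" problem described above.
        0x8000_0000..=u32::MAX => Some("in the upper half of the uid/gid space"),
        _ => None,
    }
}

fn main() {
    for id in [1000_u32, 65000, 3_000_000_000] {
        match posix_id_problem(id) {
            Some(why) => println!("id {} is a problem: {}", id, why),
            None => println!("id {} is fine", id),
        }
    }
}
```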
As a result, we want to avoid the mistakes of the last release and give you, as administrators, more time to deal with this problem. We also want to use this as a model for future migrations. A key value in our minds is that upgrades should be "hands free": you shouldn't have to interact with the container or server when you choose to do the upgrade.
Currently our thinking is:
We are certainly open to other ideas and suggestions on how we can better handle this, especially when we have to tighten restrictions that can have significant impacts on your deployments.