
feat: drop support for pg 9.6 #2052

Open · wants to merge 4 commits into main from drop-postgres-96

Conversation

wolfgangwalther
Member

Not to be merged right now, just putting this here for later.

PG 9.6 has already reached EOL. Once the next set of PG minor releases comes out, 9.6 will not receive an update anymore. I think that will be the time to remove support for it, too. People should update from unsupported versions - and if they don't, they are most likely fine with using older PostgREST versions, too.

This will be a breaking change and as such we should wait until we have at least one more breaking change committed. This will allow us to release a bug-fix release without a major version bump, just in case we need one.

@wolfgangwalther
Member Author

The next release will be a major release, so we could merge this now. Nix has already removed pg 9.6 from nixpkgs. The next set of pg minor releases is right around the corner, and at that point pg 9.6 will not have received an update for the first time.

No need to test it anymore. People should upgrade anyway.

@steve-chavez
Member

Some days ago, I received an email from pgsql-jobs that said

Desired Experience:
8+ years experience working with PostgreSQL (primarily 9.6,

So evidently there are big companies still using old postgres versions.

People should upgrade anyway.

Since there are legacy dbs (lots and lots, likely) with old versions lying around that work fine in production, new patches might not be an incentive to put in the ops work to upgrade: if it ain't broke, don't fix it. Many dbs are air-gapped, so new vulnerabilities might not affect them at all.

Can't find the comment right now, but I remember a user on gitter commenting about how minds were blown when he put a legacy db to use by pointing postgrest at it (who knows, maybe they later upgraded their pg version).

Yeah, legacy dbs can use old postgrest versions, but things like the EMFILE fix are critical for stable production usage, and first-class computed columns might also be needed.

So really dropping old versions just causes us to lose users and many legacy systems use cases.

Backwards compatibility has a lot of value. If the effort needed to keep it were large, I'd agree with not doing it, but it only requires some conditionals in the code - I see it as minimal effort (Nix is already in place, ensuring our pg 9.6 support keeps working).

Not saying we should go back to supporting pg 9.3 (IIRC the only critical cutoff, since it introduced the json functions), but maybe we should stop at 9.6.

WDYT?

@wolfgangwalther
Member Author

Since there are legacy dbs(lots and lots, likely) with old versions lying around that work fine in production, new patches might not be an incentive to put the ops work to upgrade since: if ain't broken don't fix it. Many dbs are air-gapped, new vulnerabilities might not affect them at all.

I see that point - but in that case, they wouldn't bother to touch PostgREST either.

Yeah, legacy dbs can use old postgrest versions but things like the EMFILE fix are critical for stable production usage or first-class computed columns might also be needed.

EMFILE can be worked around by starting postgrest with a higher soft limit for file descriptors in the first place, either via systemd or docker.
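For reference, a few illustrative ways to raise the soft limit before postgrest starts (the values, paths and image name are examples, not recommendations):

```shell
# 1. plain shell / wrapper script: raise the soft limit, then exec postgrest
ulimit -Sn 65536 && exec postgrest /etc/postgrest.conf

# 2. docker: set the nofile ulimit (soft:hard) for the container
docker run --ulimit nofile=65536:65536 postgrest/postgrest

# 3. systemd: in the [Service] section of the unit file
#    LimitNOFILE=65536
```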

But yeah, I see your point: I would have liked to release a minor before the next major, too. There were quite a lot of bugs fixed in the meantime. I still think we can revert the two breaking changes temporarily and focus on the two issues you mentioned to release in a minor.

So really dropping old versions just causes us to lose users and many legacy systems use cases.

Backwards compatibility has a lot of value, and if the effort needed for having this was a lot, then I'd agree in not doing it, but it only requires some conditionals in the code, I see it as a minimal effort(Nix is already in place, ensuring our working pg 9.6).

It's a double-edged sword, really. On the one side, backwards compatibility is great, yes. On the other side, it does promote the "I'm not going to touch this" mentality - which is a bad thing, imho. Today, we have so many tools available to do continuous integration etc., that I think the right approach is to "update fast and often".

Not saying we should go back to supporting pg 9.3(IIRC the only critical one since it introduces json functions), but maybe we should stop at 9.6.

I certainly don't think we should "stop" anywhere. I think we should make it a rule that we eventually drop unsupported PG versions.

Now, I think, the real problem is not that the next major version would not support PG9.6. The real problem is that we don't maintain old versions for bugfixes etc. - like PostgreSQL does. This would force users to upgrade to a new PostgREST major version, just to get some bugs fixed. We certainly don't have the resources to maintain back-branches, but what about the following:

  • Make it a documented procedure that we drop unsupported PG versions once a new set of PostgreSQL minor releases without that version has been made.
  • Once that happens, we drop support in the next major PostgREST version.
  • But we also make it a rule that we always release at least one patch/minor release between majors. So we will not merge any more breaking changes after a major until we get a bugfix release out.

I think this would be a fair compromise.

And in this specific case, we should certainly fix the EMFILE thing + add the first-class virtual columns (I have WIP locally), because the latter will allow users to upgrade to postgrest 9.0.0 in the first place. And if that wasn't the case, they wouldn't benefit from the solved EMFILE problem.

@steve-chavez
Member

steve-chavez commented Feb 5, 2022

On the other side, it does promote the "I'm not going to touch this" mentality - which is a bad thing, imho. Today, we have so many tools available to do continuous integration etc., that I think the right approach is to "update fast and often".

Yeah, but there are many orgs that don't adopt best practices. If anything, postgrest working on their old dbs might prompt them to "modernize" and upgrade since their dbs won't be treated anymore as dumb storage.

And in this specific case, we should certainly fix the EMFILE thing + add the first-class virtual columns (I have WIP locally), because the latter will allow users to upgrade to postgrest 9.0.0 in the first place. And if that wasn't the case, they wouldn't benefit from the solved EMFILE problem.

Very true.

I still think we can revert the two breaking changes temporarily and focus on the two issues you mentioned to release in a minor.

Agree, we should do it. Diverging from the pre-release is not so critical since we make no compromises there.

  • Make it a documented procedure, that we're dropping unsupported PG versions, once a new set of PostgreSQL minor releases without that version has been made.
  • Once that happens, we drop support in the next major PostgREST version
  • But we also make it a rule, that we will always release at least patch/minor release between majors. So we will not merge any more breaking changes after a major until we got a bugfix release out.

+1 on the third one, not sure about the first 2. If we can agree that the effort of supporting 9.6 is minimal, why not keep doing it?

I do agree that we can deprecate eventually, but committing to a number of minor releases... I don't think we can be sure whether that will be a short time or a long time.

@wolfgangwalther
Member Author

Hm. Dropping support for an older PG version does not mean that new users on those old PG versions can't start using PostgREST. They can still use an older PostgREST version. We could even have a matrix in the readme or docs, showing clearly up until which pgrst version we supported which pg versions. Including a direct link to the release / download page of course.

But since we're not maintaining back branches, this will only work when our latest minor release for each major release is relatively free of bugs.

I do agree in that we can deprecate eventually, but compromising on a number of minor releases.. I don't think we can be sure that will be a short time or a long time.

I see that for sure, that's why we should decouple those two things:

  • Removing support for old PG releases should happen based on the PG release cycle. Just saying "we will eventually do it" is going to be a pain - both for us (because we will have the same discussion every time :D) and for our users, because they can't plan for it.
  • Us bumping to a new major version should depend on how sure we are that we are relatively bug-free.

To achieve a stable patch/minor release, I think we should even go one step further: after a major or minor release, we should require a patch release, at least when we know about bugs. So we should basically say:

  • t0: Major release
  • At this time, we are not merging any breaking changes and no new features - only fix:
  • t1 = t0 + x weeks: patch release
  • Now we merge new features, but not breaking changes.
  • t2 = t1 + y weeks: minor release
  • Now we merge only bug fixes, probably for those new features...
  • t3 = t2 + z weeks: patch release
  • Now we merge breaking changes and everything we want to.
  • And then we release a new major

Basically formalize our development / release process a little bit. And make use of all those shiny minor and patch versions we have available - we're not using the full potential, when we're bumping from 8.0 to 9.0 ;).

We could also say that this cycle only applies when we are about to drop support for an older PG version - but when we are not, we can move quicker. Ultimately we want to have a quite stable version before dropping PG 9.6, right?

Of course, now we need to solve for x, y and z.

@steve-chavez
Member

Ultimately we want to have a quite stable version before dropping PG 9.6, right?

Yes, besides the fixes, I want to have the multiple db pool since that will make us stable under high load. I think a new minor version should be enough for this to get done, so no problem from my side there.

Removing support for old PG releases should happen based on the PG release cycle.

Maybe we can say that after an EOL we'll drop support in a major version, which will not happen unless there have been some minor/patch versions in between - so really dropping a pg version could be like a year after the EOL.

Of course, now we need to solve for x, y and z.

Perhaps it's enough to commit to a number of releases per year. 8.0.0 was particularly bad in this regard - it came out 1 year and 2 months after 7.0.1. How about 3 releases per year, so roughly one every 4 months?

@steve-chavez
Member

There were quite a lot of bugs fixed in the meantime. I still think we can revert the two breaking changes

Another option, we release the major version but hold on this PR for the next major, moving forward we agree to always drop an EOLed pg on a major. That way we save some work in reverting the commits.

@wolfgangwalther
Member Author

I want to have the multiple db pool since that will make us stable under high load.

I have heard this 3 times now - each in a single sentence, but nothing else about it anywhere. I think this is something that should be discussed in an issue first - like all other enhancements. I don't see that working well, yet.

Removing support for old PG releases should happen based on the PG release cycle.

Maybe we can say after an EOL we'll drop support on a major version, which will not happen unless there are some minor/patch versions, so really dropping a pg version could be like a year after the EOL.

Well, yes, this is what I said. I don't think we should force a new pgrst major because of an EOLed PG version. But once we have a different breaking change, and we need to make a new major release, we should execute on the EOLed PG version and drop it. However, we can also increase the timespan for doing so: What about dropping support for any PG version that is EOL for more than 1 year? That would mean once PG 15 is released, we drop support for PG 9.6 - which has then been EOL for more than 1 year. This was basically what we did with PG 9.5 in November.

Perhaps it's enough to commit to a number of releases per year, 8.0.0 was particularly bad in this regard, which was made 1 year and 2 months after 7.0.1. How about 3 releases per year, so each one every 4 months approx.

Hm. Personally, I think 3 releases per year are not nearly enough. I think we should release far more often, especially when we have bugfixes lined up. There is no point in making people wait a couple of months for bugfixes. Releasing often is only possible, though, once we really solve the "write docs immediately" problem. Otherwise missing docs updates will always delay the process. I'm still convinced that moving the docs to the main repo would be the best way to solve this, as it would allow us to easily block merging PRs without docs.

Speaking of docs in the main repo, maybe we can use a slightly adjusted workflow, that does not require us to merge the two repos and change a lot of infrastructure:
What if we integrated the latest main branch of the docs repo into the main repo - and then commit to only make changes to the docs in the main repo going forward. Then we set up a CI job that will merge any changes from the main repo docs to the docs repo. We can then still keep the docs repo to work with readthedocs. And we can use this repo to cherry-pick certain changes to the backbranches - something that I wouldn't want to do on the main repo. This would require us to not merge anything directly into postgrest-docs/main to avoid merge conflicts - and to avoid the need to merge those changes back to the main repo, which will be annoying.

The big upside: We can always go back to the two repo approach, because we can just stop the auto-merge job and do it manually again. So we could try this out and if it doesn't work well, no problem.
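The auto-merge job described above could be as simple as a copy-and-commit step. A minimal sketch under those assumptions (all paths, the repo layout and the commit message are illustrative; a real CI job would clone both repos and `git push` at the end):

```shell
set -e
tmp=$(mktemp -d)

# stand-in for a checkout of the main repo, which now contains docs/
mkdir -p "$tmp/main/docs"
echo "hello" > "$tmp/main/docs/index.rst"

# stand-in for a checkout of the separate docs repo
git init -q "$tmp/docs"
cd "$tmp/docs"
git config user.email "ci@example.com"
git config user.name "CI"

# copy the docs subdirectory over and commit it
cp -R "$tmp/main/docs/." .
git add -A
git commit -qm "sync docs from main repo"
```

Since this only ever flows main repo -> docs repo, it relies on the rule above that nothing is merged directly into postgrest-docs/main.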

@wolfgangwalther
Member Author

wolfgangwalther commented Feb 6, 2022

Another option, we release the major version but hold on this PR for the next major, moving forward we agree to always drop an EOLed pg on a major. That way we save some work in reverting the commits.

The reverts are not complicated. What we do here depends on which "drop EOLed versions" strategy we agree on:

  • With the first PG set of minor releases without the EOLed version (my first suggestion)
  • With the next PG major, i.e. one year after EOL.

In the first case, we should do the reverts now. In the second case, we don't need to and we keep this PR open until then.

@steve-chavez
Member

I have heard this 3 times now - each in a single sentence, but nothing else about it anywhere. I think this is something that should be discussed in an issue first - like all other enhancements. I don't see that working well, yet.

Sorry about that, you're right. (I need to be more trigger happy with issues)

Releasing often is only possible, though, once we really solve the "write docs immediately" problem.

True, I've also noticed this lately.

What if we integrated the latest main branch of the docs repo into the main repo - and then commit to only make changes to the docs in the main repo going forward. Then we set up a CI job that will merge any changes from the main repo docs to the docs repo.

We'd lose the RTD previews with this though, right? That does help a bit when reviewing. Or could we make previews work?

With the next PG major, i.e. one year after EOL.

Sold. I like that one.

@wolfgangwalther
Member Author

We'd lose the RTD previews with this though, right? That does help a bit when reviewing. Or could we make previews work?

I think that would be no problem. We could just create another RTD project just to build those. That should be simple.

@steve-chavez
Member

I think that would be no problem. We could just create another RTD project just to build those. That should be simple.

Ah, that could work. Not opposed, but perhaps there's a much simpler way.

Releasing often is only possible, though, once we really solve the "write docs immediately" problem.

How about if, similar to the CHANGELOG step, we make it a requirement to write docs before merging a PR?

GitHub has branch protection, two options:

  • We require approvals to merge. Unfortunately this also requires making a pull request for every merge, which breaks the pre-release workflow.
  • We require conversation resolution before merging. This could be as simple as noting in the CHANGELOG entry that "this needs docs", and it doesn't require a pull request for every merge.

We agree to honor that (mostly me, since I tend to leave docs for much later), so docs will not stay behind.

@wolfgangwalther
Member Author

Ah, that could work. Not opposed, but perhaps there's a much simpler way.

I don't see any - and adding another project is actually quite simple. I did it a few times when trying the translation stuff, so I now know which buttons to click.

How about if, similar to the CHANGELOG step, we make it a requirement to write docs before merging a PR.
[...]
We agree to honor that(mostly me, since I tend to leave docs for much later), so docs will not stay behind.

Well, yes. We should do that.

To help that process, having the docs in the main repo is great. One benefit I see here is that it will make it easier for you to not fall behind, sure. But for me, the even bigger effect is in my own workflows: It's quite annoying to always have to open another PR in the docs repo, even for such small changes as PostgREST/postgrest-docs#503. The mere fact that I have to switch to a different repo, open another MR etc. causes quite a delay. If that wasn't the case, I would have probably merged the corresponding PR here a week earlier or so. I am lazy too, and I have to fight hard to not delay the docs-writing to "later".

We require conversation resolution before merging, could be as simple as pointing in the CHANGELOG entry that "this needs docs". This doesn't require a pull request for every merge.

Where can I find that? I just went through the settings in a different repo and didn't find it, yet.

@steve-chavez
Member

The mere fact that I have to switch to a different repo, open another MR etc. causes quite a delay. If that wasn't the case, I would have probably merged the corresponding PR here a week earlier or so. I am lazy too, and I have to fight hard to not delay the docs-writing to "later".

True. Cool, let's try the latest docs in the main repo approach and see how it goes.

Where can I find that? Just went through the settings in a different repo and didn't, yet.

I see it on Settings -> Branches

(screenshot: branch protection options under Settings -> Branches)

@steve-chavez
Member

PostgreSQL Upgrades Are Hard

One reason to keep supporting old versions ^

@wolfgangwalther
Member Author

Instead of fixing #3222, I'd like to drop support for PG 9.6 and 10.0. Personally, I would like to just drop support for v11.0, too, but we can also wait a while with that. I agree that upgrading pg major is relatively hard. But upgrading pg minor versions is not, so I think we can raise the minimum required version to v11.22 (latest 11 release) instead of v11.0 only. This allows us a few more simplifications in the tests. Since this will only affect users actually updating PostgREST to the next major version, we can expect them to at least update PostgreSQL to the latest minor.

Removing those old pg versions makes the code and tests easier, but also has the advantage that we have fewer jobs to run in CI and more importantly fewer dependencies to carry around via nix. Those are a big factor for CI performance - and the legacy PG versions actually carry more weight than current versions, because we need to import an older version of nixpkgs for them. While this works, this also means we get multiple versions for many dependencies, too... so it's actually beneficial for us to drop those.

It would be even better, of course, if we could drop v11, too - and then basically adopt a policy to drop support for PG versions as soon as they are dropped from nix.

@wolfgangwalther wolfgangwalther force-pushed the drop-postgres-96 branch 2 times, most recently from 4b120c5 to 4563d49 Compare February 10, 2024 18:56
@wolfgangwalther
Member Author

we can raise the minimum required version to v11.22 (latest 11 release) instead of v11.0 only.

Had to lower that to 11.21, because the legacy nixpkgs we import actually only has 11.21.

@steve-chavez
Member

This will require a major version, right? Can it be considered a breaking change?

@wolfgangwalther
Member Author

This will require a major version right? It can be considered a breaking change?

Yes. Do you want to make a minor release before we merge this or should I go ahead?

@wolfgangwalther
Member Author

Removing those old pg versions makes the code and tests easier, but also has the advantage that we have fewer jobs to run in CI and more importantly fewer dependencies to carry around via nix. Those are a big factor for CI performance - and the legacy PG versions actually carry more weight than current versions, because we need to import an older version of nixpkgs for them. While this works, this also means we get multiple versions for many dependencies, too... so it's actually beneficial for us to drop those.

It would be even better, of course, if we could drop v11, too

Here are some numbers. The closure of withTools, which we are using in many of the nix based jobs is:

  • 7.9G on main
  • 7.8G after removing pg9.6 and pg10
  • 6.3G after removing pg11 additionally

We are carrying 1.5G of additional dependencies because of importing the old nixpkgs snapshots. Removing only 9.6 and 10 doesn't help much, yet - but removing all three gives us a boost.
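For anyone wanting to reproduce such numbers, the classic nix idiom is to sum the closure with du (the attribute name below is an assumption; use whatever actually builds the withTools environment):

```shell
# build the environment and create a ./result-withtools symlink
nix-build default.nix -A withTools -o result-withtools
# print the total size of its closure (all runtime dependencies)
du -sch $(nix-store -qR ./result-withtools) | tail -n1
```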

@wolfgangwalther
Member Author

One argument for not removing support for old PG versions immediately when they become EOL was, that PostgREST users on legacy versions of PG might need to update because of bugfixes. Since we are now doing release branches and backporting of bugfixes, this is not that much of a problem anymore.

Another concern was that new users would like to point PostgREST at their legacy databases. We can counter this with a pg/pgrst support version matrix in the docs / readme. If we point out which version of PostgREST to use to connect against a PostgreSQL server of version X, then that should help already. Users can always take older versions - all older releases are easily available through docker and github releases.

We might even put this info in the source itself, so when you try to connect with postgrest 13 to a pg 9.6 database, you will get a message that the last version to support pg9.6 was postgrest 12.x etc.
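As a sketch of that idea (the function, threshold and message below are hypothetical, not actual PostgREST source; the real check would live in Haskell and read `server_version_num` from the connection):

```shell
# Map PostgreSQL's server_version_num (e.g. 90624 = 9.6.24, 110022 = 11.22)
# to either "ok" or a hint about the last PostgREST version supporting it.
check_pg_version() {
  if [ "$1" -lt 110000 ]; then
    echo "unsupported PostgreSQL; the last PostgREST version to support it was 12.x"
  else
    echo "ok"
  fi
}

check_pg_version 90624   # a PG 9.6 server -> prints the upgrade hint
check_pg_version 110022  # a PG 11.22 server -> prints "ok"
```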

@steve-chavez WDYT?

@steve-chavez
Member

We might even put this info in the source itself, so when you try to connect with postgrest 13 to a pg 9.6 database, you will get a message that the last version to support pg9.6 was postgrest 12.x etc.

@wolfgangwalther Agree. That sounds really nice!

Yes. Do you want to make a minor release before we merge this or should I go ahead?

Could you make the minor release and then merge this? (I'm AFK now)

@wolfgangwalther
Member Author

We might even put this info in the source itself, so when you try to connect with postgrest 13 to a pg 9.6 database, you will get a message that the last version to support pg9.6 was postgrest 12.x etc.

Agree. That sounds really nice!

If we do this, would you agree with removing support for PG 11 at the same time and then change our policy to say that we're only supporting those postgres versions that are still in nixpkgs?

This would ensure that our dependencies for CI (and local dev) become smaller and stay smaller.

Could you make the minor release and then merge this? (I'm AFK now)

Let's discuss #3113 (comment) first?

@steve-chavez
Member

steve-chavez commented Feb 11, 2024

If we do this, would you agree with removing support for PG 11

Agree with removing pg 11. My reasoning before was that there wasn't much burden in supporting older pg versions, so why not keep doing it - but it's different now that there's trouble with Nix.

at the same time and then change our policy to say that we're only supporting those postgres versions that are still in nixpkgs?

I think it'd be better to say that our policy is to support all non-EOL pg versions. I suppose that's the same policy that nixpkgs adopts.

PostgreSQL 11 is already EOL, so we only support its latest minor release, which is 11.22. Since the legacy nixpkgs we import PG from for our tests only has 11.21, we accept that as well. PostgreSQL 11 has been EOL since November 2023.