Knex:Error Pool2 - error: too many connections for role #1027
Update: I've confirmed that this happens when we first get the error that there are too many connections for our role.
Frequently getting a similar issue here too... not sure how to deal with it.
I've also been seeing a lot of these; seems like maybe a new issue. We shouldn't have to call
I'm seeing these mostly on Heroku - I've heard that it has some bugs with Postgres connections.
@JDillon522 @tomatau @tjwebb does either of the suggestions in #975 help with this?
I've been using
@blah238 I will try and report back
Just ran into this problem myself; we're performing a lot of writes. Might have to move away from knex because of this problem.
Any new suggestions for this? Still getting
@tomatau Sorry, just throwing out ideas. I don't have a good understanding of the issue here. I think we would need a detailed repro case to be able to do anything about it.
Been having a lot of issues with this while trying to run migrations -- connecting to Heroku seems to be the source of all the issues. Knex:Error Pool2 - error: too many connections for role "databasename" -- just all over the place.
It helped me to share the knex connection instance across different tasks within the same Heroku deploy, and to make sure the total stays below the free tier's 20-connection limit.
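A hedged sketch of what that might look like: knex's pool options include `min` and `max`, so capping `max` per instance keeps a deploy's total under the 20-connection free-tier limit. The numbers below are illustrative assumptions, not recommendations.

```javascript
// Illustrative knex configuration capping the pool size per instance.
// Assumption: two processes share this config, so total connections
// stay at or below 2 * 8 = 16, under the 20-connection limit.
const config = {
  client: 'pg',
  connection: process.env.DATABASE_URL,
  pool: {
    min: 2, // knex's default minimum
    max: 8, // per-instance cap (assumed: 2 processes * 8 <= 20)
  },
};

// In a real app you would then do: const knex = require('knex')(config);
console.log(config.pool.max * 2 <= 20); // true
```

The key point is that every knex instance gets its own pool, so the cap has to be chosen with the number of instances (processes, dynos, tasks) in mind.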
I encountered this error yesterday. In my case, it's because I was creating a Knex instance every time I needed to access the database. Each Knex instance maintains its own connection pool internally, and by default each pool keeps at least 2 connections alive. So you can imagine how many pools get created and how many connections stay open if you create a knex instance every time.

The fix is simple in my case: just make knex a singleton, that is, share one knex instance across the application.

@tgriesser Am I missing something? If what I mentioned above is correct, I would suggest emphasizing this in the documentation, so users know they need to share the Knex instance in their app. Or maybe a better approach would be to share the connection pool inside knex.
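The singleton approach can be sketched in plain Node. In a real app you would put the instance in its own module (e.g. a hypothetical db.js) and require it everywhere; Node's module cache then makes it a singleton. `createKnex` below is a stand-in for `require('knex')(config)` so the sketch runs without a database.

```javascript
// Minimal sketch of sharing one knex instance (and thus one pool).
let instance = null;

function createKnex(config) {
  // stand-in for require('knex')(config)
  return { config };
}

function getDb() {
  // lazily create the instance once, then hand back the same object
  if (!instance) {
    instance = createKnex({ client: 'pg', pool: { min: 2, max: 10 } });
  }
  return instance;
}

// Every caller gets the same instance, so only one pool is ever created:
console.log(getDb() === getDb()); // true
```

With one module exporting the instance, "create a Knex instance every time" becomes impossible by construction.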
@mouhong There are various reasons why people like to have multiple connection pools / knex instances, so making knex always share the same pool internally would be a problem... I always thought it was obvious that every instance has a separate pool, since one can pass pool configuration as a parameter to the constructor. Anyway, if you have a good suggestion for where and how to add this to the documentation, please send a pull request; I would be happy to merge it. The documentation page is
@elhigu But the pool configuration is optional; beginners might not be aware of it. I think it's good to have a "safe by default" design, that is, a user can create a Knex instance with a very small set of parameters and then start using it without problems. Anyway, for the docs, I think we could add a "notice" or "best practice" note at the end of the Initializing the library section. But English is not my primary language and I'm not very good at English writing; it might be a bit difficult for me to write formal docs. :(
@mouhong Did the singleton approach work for you? We're having this exact same issue with Knex and Postgres on Azure.
Updates? I'm facing the same problem on AWS Lambda + RDS |
Closing; Pool2 has not been used since knex ~0.11. Here is some info about knex and AWS Lambda: #1875
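For the Lambda + RDS case mentioned above, a commonly suggested pattern (sketched here as an assumption, not quoted from #1875) is to keep the pool minimal, so idle containers don't hold connections open:

```javascript
// Illustrative knex pool settings for a serverless environment.
// min: 0 lets the pool drop to zero connections when idle;
// max: 1 bounds each concurrent Lambda container to one connection.
const lambdaConfig = {
  client: 'pg',
  pool: { min: 0, max: 1 },
};

// In a real handler you would build the instance once, outside the
// handler function, so warm invocations reuse it:
//   const knex = require('knex')(lambdaConfig);
console.log(lambdaConfig.pool.min === 0 && lambdaConfig.pool.max === 1); // true
```

Since each concurrent Lambda container is its own process with its own pool, the effective connection count is roughly `max` times the number of warm containers, which is what tends to exhaust small RDS instances.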
@myndzi
We're having issues with our database lately. Our settings are:
We are not calling knex.destroy() anywhere, so we aren't explicitly destroying any connections. Here is the full stacktrace from the error:

We're at a loss. Any help on how to debug this would be appreciated. It's especially hard since the error is intermittent: it'll be fine for a day or two of active development and then reappear and halt everything.