Best practice #461
Comments
It says in the code comment(s):
... So I would guess no. Worst case, they time out or get closed automatically based on the configured maximum number of idle connections?
Also, see http://go-database-sql.org/accessing.html (the part after the numbered list).
@benguild how did you configure yours? Any idea where we can find best-practice principles for setting these under different load conditions?
For more than a year, I've used this driver by opening a connection at the start of each request and closing it at the end. Upon hearing this advice, I now open one connection, keep the reference in a global variable, and keep using it; I never close it. I haven't set any of the pool limits. This technique simply doesn't work for me: the number of active connections to CloudSQL keeps increasing until it hits the Google App Engine connection limit, after which I can't connect to the database at all. The connections are not getting closed.
The Go sample for CloudSQL calls Ping. I wonder if this issue is specific to CloudSQL and not tied to the driver itself? Maybe @broady has some insight?
Well, I'm now not closing it, and that's when the issues started.
I wrote that sample code with the understanding that the connection would be long-lived. I have actually never used SQL from Go in a production app. Is it disconnecting because of a timeout (in which case we need a periodic keep-alive ping) or because the socket is broken? Who should handle this, the driver or the user?
Actually, what I said doesn't make any sense. The sql package should handle bad connections in its pool. Did we limit this issue to this driver, or is it a bug in database/sql?
@pjebs I'm calling those right after it opens the connection. Try calling these as soon as your connection is open:

```go
db.DB().SetConnMaxLifetime(time.Minute * 5)
db.DB().SetMaxIdleConns(0)
db.DB().SetMaxOpenConns(5)
```

See if it ever exceeds 5 connections, and whether any connection idles past 5 minutes. Let me know. I would think it would hang if no idle connection were available in the pool, but I haven't read the code, so it might just fail in that case.
I wonder if it's a problem with the application code. If the handlers aren't closing their result sets, then the driver has to open more and more connections because the existing ones are still in use. When the connection was opened and closed for each request, the connections held open by in-progress queries were terminated, so they never piled up. That would explain why the problem appeared with the switch to the single global persistent connection.
Also, @pjebs, check to make sure (obviously) that you're not somehow creating an additional database pool entirely by accident somewhere in your code. If a new pool is being allocated somewhere and never released, that could slowly cause the issue. The open/close-with-each-request method would work too, as long as your database connection latency isn't too bad and the pool is successfully deallocated afterward, but I think in most settings keeping one pool is better, as long as two separate applications aren't (for example) each holding lots of connections open for a long period and blocking the other from allocating any, even while theirs sit idle.
@dgryski Yeah, I was just thinking that as well (see above). During the switch from single-use connections to a retained pool, there could be an instance where another pool was created somewhere in the code and never deallocated, or something like that.
Another issue that is related: with my previous method, for debugging purposes I used to do this (to avoid a foreign key constraint error):
It used to ALWAYS work. Now that I am using 'connection pooling', it never works. I understand that. But how come it worked with my previous method? Although I always created a new connection and closed it at the end of the request cycle, wouldn't the 'connection' I created still utilise connection pooling?
@pjebs the connection pool may open multiple connections. If you only ever need a single connection at a time, then you'll stay with one. In your old case, you never had two concurrent database requests on the same pool, so only one connection was ever made; thus, all Execs happened on the same connection. Once you moved to a proper pool, this is no longer the case. This would be easier to debug if we could see more of your code.
@pjebs you need to do that inside a database transaction to ensure it hits the same connection: http://jinzhu.me/gorm/advanced.html#transactions

Although I'm unclear on why it'd be doing what you speak of, it's not completely out of line for it to use different connections for each statement's request. That's sort of the idea behind the "pool": it doesn't necessarily know what is making the requests, and there could be many requests operating simultaneously. It'd be kind of dumb for a single connection to be reserved for a single request for its entire lifetime. Use a transaction; it would be totally out of line for the pool to swap connections within a transaction.

What you're doing should really be within a transaction anyway, as changing configuration parameters could affect other queries from other parts of your code or other requests. The way that's currently written, if your code panics, that setting would never get changed back on that particular connection!

On that note, also be sure that you defer rolling the transaction back if it never finishes (if there's a crash/panic, for example), because otherwise it might hang that database connection, or another process could adopt that connection and start making changes within a transaction that will never be committed. See https://github.com/jinzhu/gorm/issues/1056
I noticed something in @broady's code: https://github.com/GoogleCloudPlatform/golang-samples/blob/master/getting-started/bookshelf/db_mysql.go#L82 In the https://godoc.org/github.com/garyburd/redigo/redis#Pool library, they also implement a nice pool. Is there something similar in this driver?
@pjebs what is the actual current issue?
Just asking about best practice on when to call Ping.
Oh, I see. As far as I can tell, you shouldn't need to...? But if there are dead connections still in the pool... it seems necessary.
You definitely need to Ping. They do it in the CloudSQL sample.
We "definitely" need to Ping? That makes no sense... what happens if the connection isn't valid? Abort, or retry? |
Why wouldn't running a query call the equivalent of Ping internally?
OK, so then why do we definitely need to Ping?
I suppose you could retry, but what is the failure condition you're working around? A bad network connection?
@benguild |
Yes, occasionally one of the returned pooled connections is bad. Currently I call Ping, and if Ping fails, I explicitly close it and reconnect.
@broady, OK, that makes more sense... I didn't think it would keep dead connections in [the pool] or release them from the pool to the caller. @pjebs FYI, I know that we're both using gorm:

```go
if err == nil {
	err = db.DB().Ping() // send a ping to make sure the database connection is alive
}
```
OK, so considering this particular library attempts `Ping` on its own:

```go
for databaseConnectionAttemptLoop := 0; databaseConnectionAttemptLoop < 4; databaseConnectionAttemptLoop++ {
	// NOTE: If there is an error, the loop continues and replaces the value of "err"
	// with each attempt. Once the maximum number of attempts has been made, the loop
	// simply exits and the error is returned at the end of the function.
	err = nil
	if db, err = gorm.Open( /* database connection string... NOTE: this function attempts `.Ping` on its own. */ ); err == nil && db != nil {
		/* configure connection */
	}

	if err == nil {
		break // no error: break out and do not retry again
	}
}
```

Not currently sure what is expected if bad connections can intentionally be returned.
Currently, if Ping fails (usually due to a bad connection), I just loop indefinitely, trying to connect again with a slight delay between attempts.
Hmm, I don't know if I'd do that— |
That's what I'm currently doing for testing. It won't be done in production. I'll put a limit on how many attempts it does. |
Google's SRE book suggests a maximum of 3 attempts with randomized exponential backoff, plus a global per-process retry budget of 10%: if you start failing more than 10% of your calls, stop retrying.
Oh wow, thanks @dgryski. Where can I find the SRE book?
https://books.google.com.au/books?id=81UrjwEACAAJ&redir_esc=y |
@dgryski What do you mean by "global per-process retry budget"?
@benguild
Using this snippet with multiple goroutines, I got the error below.
See https://github.com/jinzhu/gorm/issues/1898. Following the earlier advice ("Try calling these as soon as your connection is open") solved it.
I have the same problem connecting to ProxySQL. I have set the following configuration for a production node with high load:
But when I check the number of connections for the db port, it shows more than 21k connections in
I encountered the same problem: there are more than 20k connections in the
With this driver, is it best practice to open the database using `Open("mysql", "...")` on every HTTP request and close it using `Close()` at the end of every request cycle, OR to just open it once and never close it, irrespective of the request cycle?