Proposal to bump persistent_timeout to 65 seconds #3378
Description
I noticed that if Puma's `persistent_timeout` is shorter than the load balancer's idle timeout, occasional minor service disruptions can occur. Consider, for instance, the AWS Application Load Balancer (ALB) with its default idle timeout of 60 seconds. If the ALB forwards a request to Puma just as Puma is closing the socket, the request can race the close, and a TCP RST/FIN is sent back to the ALB. The ALB then handles that RST/FIN by returning a 502 error to the client, which is not ideal.

Since most load balancers have default idle timeouts around 60 seconds (AWS ALB/ELB) or 55 seconds (Heroku), I am proposing that we increase Puma's `persistent_timeout` to 65 seconds so that the upstream negotiates and closes the connection before Puma does.

Your checklist for this pull request
- [ ] If this PR doesn't need tests (docs change), I added `[ci skip]` to the title of the PR.
- [ ] If this PR closes an issue, I put the text "closes #issue" to the PR description or my commit messages.
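For reference, the proposed setting as it would appear in a user's Puma configuration (a sketch assuming the standard `config/puma.rb` DSL; the 65-second value is the one proposed above):

```ruby
# config/puma.rb
#
# Keep idle keep-alive connections open slightly longer than the load
# balancer's idle timeout (60 s by default on AWS ALB/ELB, 55 s on
# Heroku), so the upstream closes the connection before Puma does and
# never forwards a request into a socket Puma is tearing down.
persistent_timeout 65
```

This PR changes only the default; applications fronted by a load balancer with a non-default idle timeout can still tune the value the same way.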