Add timeout to RMQ #87

Draft: wants to merge 1 commit into master

Conversation

kasesalum commented Jun 30, 2023

Requests that take too long and never time out could cause incidents, hence this adds a configurable timeout. If no timeout is configured, none is enforced, preserving the current behaviour.
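
A minimal sketch of the behaviour described above, assuming a hypothetical :timeout option (in seconds) and Ruby's stdlib Timeout; the method and option names are illustrative, not the library's actual API:

require "timeout"

# Illustrative only: wrap the handler callback in Timeout.timeout when a
# :timeout option is given; omitting the option keeps the current behaviour
# of enforcing no timeout at all.
def invoke_handler(payload, options = {}, &callback)
  if options[:timeout]
    Timeout.timeout(options[:timeout]) { callback.call(payload) }
  else
    callback.call(payload)
  end
end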

indrekj (Member) left a comment

This is missing the reason why, and it may break things.

kasesalum force-pushed the ENG-227-request-timeouts branch 2 times, most recently from b17b7aa to b433348 on July 4, 2023 at 12:52
kasesalum (Author) commented

@indrekj Added the reason, changed the code in a way that should not affect its current usage. I'm having trouble creating tests for it, though. Do you have any suggestions?

 # end
 # end
-def respond_to(destination, &callback)
+def respond_to(destination, options = {}, &callback)
indrekj (Member) left a comment

options is not documented
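
For illustration, one way a caller might use the new signature, assuming a hypothetical :timeout key (in seconds); the diff above only shows that an options hash was added, not which keys it accepts:

# Hypothetical usage; client, the queue name, and the :timeout key are
# assumptions, not taken from the actual change.
client.respond_to("payments.charge", timeout: 30) do |payload|
  handle_payment(payload)
end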

indrekj (Member) commented Jul 4, 2023

I'm still not convinced that having this kind of high-level timeout is a good idea. See:

Maybe if it's large enough and only triggered in very exceptional cases, it could be OK.

cc @sviik

kasesalum (Author) commented

@indrekj Siim is on vacation for 2 weeks. Do you maybe have any suggestions on how to better approach the ticket?

indrekj (Member) commented Jul 5, 2023

Hm... If we were to go with a timeout, it would have to end up as an error and also reach Sentry (and Opsgenie). The current "logger.warn" will get ignored, and if we get into a corrupted state, that would be really bad. We probably need to make sure the error reaches the actual application, so the application can send it to Sentry.

I also looked into Rack. We don't have a timeout there either.

Rack has "Rack::Timeout" module, but it also warns about the corrupt state. One solution to avoid that is to restart the whole process on timeout.

I'm leaning toward not adding the timeout at all. Or if you do, then let's start on the application level, instead of adding it to the library immediately. And we probably should restart the process.
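
A rough sketch of the application-level approach described above, assuming the sentry-ruby SDK and a supervisor (systemd, Kubernetes, etc.) that restarts the process after it exits; the handler (process), the constant, and the timeout value are illustrative:

require "timeout"
require "sentry-ruby"

# Deliberately large: only meant to catch handlers that are truly stuck.
HANDLER_TIMEOUT = 300

def handle_message(payload)
  Timeout.timeout(HANDLER_TIMEOUT) { process(payload) }
rescue Timeout::Error => e
  # Surface the timeout as a real error instead of a logger.warn, then exit
  # so the supervisor restarts the possibly corrupted process.
  Sentry.capture_exception(e)
  exit(1)
end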
