
Request: allow detecting when hubot-slack has disconnected and will not reconnect #536

Open
4 of 9 tasks
mistydemeo opened this issue Oct 4, 2018 · 7 comments
Labels
auto-triage-skip enhancement M-T: A feature request for new functionality good first issue

Comments

@mistydemeo
Contributor

Description

While hubot-slack supports reconnecting when the connection to Slack is lost, there is a limit on the number of reconnection attempts. If several attempts fail in a row, the bot simply hangs without a connection. This causes trouble for our process manager: because the process stays alive, it never gets restarted automatically. This has happened to us a few times recently, and each time we had to notice it manually and restart the service to bring the bot back up.

I'd like some form of event to be emitted in this condition so that we can detect when it has happened, terminate the process, and let the process manager restart it.

What type of issue is this? (place an x in one of the [ ])

  • [ ] bug
  • [x] enhancement (feature request)
  • [ ] question
  • [ ] documentation related
  • [ ] testing related
  • [ ] discussion

Requirements (place an x in each of the [ ])

  • [x] I've read and understood the Contributing guidelines and have done my best effort to follow them.
  • [x] I've read and agree to the Code of Conduct.
  • [x] I've searched for any related issues and avoided creating a duplicate issue.

Bug Report

Filling out the following details about bugs will help us solve your issue sooner.

Reproducible in:

hubot-slack version: 4.5.5

node version: v8.2.0

OS version(s): Debian jessie

@aoberoi
Contributor

aoberoi commented Oct 8, 2018

related: #215

it looks like at some point in this package's history, there was an environment variable used to configure whether the process would terminate in this condition, not just fire an event. should we reintroduce that configuration? fire the event? both?

@aoberoi aoberoi added enhancement M-T: A feature request for new functionality good first issue labels Oct 8, 2018
@ben-nat-wallis

It would be really nice to get this back into Hubot. We have a major issue at the moment with our hubot randomly disconnecting from our Slack, and there is no way to detect this so that we can restart the Docker container it's running in.

@charliekump-wf

charliekump-wf commented May 3, 2019

@ben-nat-wallis

For our hubot, my workaround at the moment is a cron script that checks the logs for the reconnect message, then kills/restarts the container 🤷‍♂

CHECK_STATUS=`docker logs $CONTAINER_ID | grep "INFO Slack client closed, waiting for reconnect" | wc -l`
[ "$CHECK_STATUS" -gt 0 ] && docker restart $CONTAINER_ID  # restart if the message ever appeared

@github-actions

github-actions bot commented Dec 5, 2021

👋 It looks like this issue has been open for 30 days with no activity. We'll mark this as stale for now, and wait 10 days for an update or for further comment before closing this issue out.

@mistydemeo
Contributor Author

There's been no activity because it was never fixed. I opened this in 2018.

@seratch
Member

seratch commented Dec 5, 2021

Hi @mistydemeo, sorry for the false alert. This was not intentional behavior of the triage bot we recently introduced. I've marked this issue as "auto-triage-skip".

@caiocrivellente

Hey guys, any update on this issue?

I'm having the same problem.

Suddenly hubot loses connectivity to Slack and nothing shows up in the logs.
