Notify on failure rate #858
Just to play devil's advocate here: if you know the frequency and the acceptable number of failures, could you calculate the desired grace period and just use that?
Yeah, I think that should work at least in some cases.
Let's say a job is expected to run every 5 minutes, but you would like to know when it has not run even once for full 24 hours. Set grace time to 24 hours and done. The other half-serious thing I'd like to suggest: fix your checks to not be flaky! If you get notifications about them flapping up and down, make the notifications loud, obnoxious, hard to ignore, as a reminder you have stuff to fix, people to kick, suppliers to fire, etc. |
That's not how grace time is documented to work. From the docs:
From what I understand, it does not account for checks reporting failures.
Easier said than done 🙂 I'm using healthchecks.io as low-effort monitoring for some of the cron jobs on my personal home server. I don't have much time to invest in it, so I need to prioritize. I currently have my backups set to run once an hour, but if some of them fail, it's not mission-critical. I can live with 24-hour-old backups if needed. But if there aren't any backups for more than a day, then I really need to start worrying. Same for a Nextcloud cron job: it's scheduled to run every 5 minutes. If one run fails, it's not really an issue, as long as not too many fail in a row.
Right, the "set grace time to 24 hours" idea will not work if the job also sends failure signals. When Healthchecks receives a failure signal, it sends notifications immediately, regardless of grace time settings.
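For context, a job typically signals success by requesting its ping URL and signals failure by appending `/fail` to it, and it is that failure signal which alerts immediately. A minimal wrapper sketch (the UUID and the job command are placeholders):

```shell
#!/bin/sh
# Placeholder ping URL; substitute your check's actual UUID.
BASE="https://hc-ping.com/your-uuid-here"

# Pick which endpoint to ping based on the job's exit status:
# the plain ping URL on success, the /fail endpoint on failure.
ping_url() {
  if [ "$1" -eq 0 ]; then
    echo "$BASE"
  else
    echo "$BASE/fail"
  fi
}

# Typical usage (network call shown as a comment to keep this self-contained):
#   run_backup; curl -fsS -m 10 --retry 3 "$(ping_url $?)" > /dev/null
```

With this pattern, the grace period only governs the success pings going silent; any `/fail` ping notifies right away.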
I was just about to create the exact same issue, but even simpler. My idea was to allow a "threshold" number of failures before marking the job as failed. Until the threshold is crossed, mark the job as "warning", like with the grace period.

**Why not add redundancy in the check call?**
That model works when the job retries internally and sends success only when it finishes. In our case, however, we try to make sure failures are sent explicitly any time something fails. In addition, even if retrying can be added in a different layer, that doesn't mean it's a good idea. The lack of a check and an explicit failure are very different signals, IMO.

**So, why not fix the system to not be flaky?**
That's an obvious question, and it has already been posed here. However, not everything requires a fix. One of our uses is a middleware that monitors internal HTTP services which cannot ping Healthchecks themselves but expose a status endpoint.

**Real-world scenario**
One of the systems I am monitoring is an NVR. We push a status for the NVR itself, but also for each camera snapshot being taken. Being notified of camera snapshots failing is crucial, for obvious reasons... except when cameras are being rebooted for an update or as preventive maintenance (don't shoot the messenger, that's what many IP cameras need). The reboot is quick (~30 s), but if a snapshot falls within that reboot window, the failure is reported and people are notified. This has already led to failures being ignored "because it's rebooting, so I will wait for another failure".
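The "threshold before down" idea described above could be modeled roughly like this. This is a hypothetical sketch, not Healthchecks code; all names are illustrative:

```python
from dataclasses import dataclass

@dataclass
class FailureThresholdCheck:
    """Hypothetical model: a check goes to 'warning' on a failure signal
    and only to 'down' once `threshold` consecutive failures arrive."""
    threshold: int
    consecutive_failures: int = 0

    def report(self, ok: bool) -> str:
        if ok:
            self.consecutive_failures = 0
            return "up"
        self.consecutive_failures += 1
        if self.consecutive_failures >= self.threshold:
            return "down"    # notify now
        return "warning"     # suppress, analogous to the grace period

check = FailureThresholdCheck(threshold=3)
states = [check.report(ok) for ok in (False, False, True, False, False, False)]
assert states == ["warning", "warning", "up", "warning", "warning", "down"]
```

In the NVR scenario above, a threshold of 2 would absorb the single failed snapshot during a ~30 s camera reboot while still alerting on sustained outages.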
This stems directly from healthchecks/healthchecks#858. Healthchecks does not currently support notification suppression for reported failures; it only supports it for no-show cases (i.e. the check did not deliver a signal within the grace period).
@cuu508 Is there any chance of this feature making the cut? :)
I do not have near-term plans to work on this. My vision for Healthchecks is to stay simple and focused on its primary task: notify the user when something does not happen on time. I'd like to avoid, or at least delay, the common scenario of a tool starting with a simple scope, then slowly adding semi-related functionality until it has everything but the kitchen sink, has a complex UI, and lacks cohesion.

However, nothing is set in stone. I said "no" to check auto-provisioning, to a first-party Dockerfile, and to other features for a long time, but eventually added them.
Two comments on this:
Side note: every time I push code to Healthchecks.io, I receive a wave of notifications from load balancer servers reporting that some backend nodes have gone down and then recovered. On one hand it's annoying to deal with these notifications on every code deploy; on the other hand it is good to see regular proof that 1) the monitoring system works as expected, and 2) during each deploy the backend nodes do indeed recover (they don't stay down). I've considered adding some notification suppression during code deploys, but decided against it for these two reasons.
I have a case of a check that is a bit flaky these days, but all I care about is that it runs at least sometimes.
Would it be possible to implement something like a threshold of failures for notifications?
Instead of notifying immediately on the first failure, we would compute the percentage of failed jobs over a period of time and alert the user only when this threshold is crossed.
Individual checks would have different thresholds and time periods.
Users may also like to configure the threshold as a percentage or as an absolute number (e.g. X failures over Y minutes).
To keep the current behavior, the default would be to alert when the percentage of failed jobs exceeds 0% (or 0 failures) over a minute.
If this is implemented, we could also improve it later by adding per-integration thresholds. For example, I may want to be alerted by email for every failed job, but only by phone call if the failure rate exceeds some higher threshold.
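The proposal above, sketched under the assumption of a sliding time window over recent pings (a hypothetical illustration, not Healthchecks code; all names are invented):

```python
from collections import deque
from datetime import datetime, timedelta

class FailureRateAlerter:
    """Hypothetical sketch: alert when the share of failed pings
    within a sliding window exceeds a per-check threshold."""

    def __init__(self, threshold_pct: float, window: timedelta):
        self.threshold_pct = threshold_pct
        self.window = window
        self.events = deque()  # (timestamp, ok) pairs, oldest first

    def record(self, ok: bool, now: datetime) -> bool:
        """Record a ping; return True if an alert should be sent."""
        self.events.append((now, ok))
        cutoff = now - self.window
        while self.events and self.events[0][0] < cutoff:
            self.events.popleft()
        failures = sum(1 for _, ok in self.events if not ok)
        rate = 100.0 * failures / len(self.events)
        return rate > self.threshold_pct

# 50% threshold over 1 hour: two failures out of three pings crosses it.
t0 = datetime(2024, 1, 1, 12, 0)
a = FailureRateAlerter(threshold_pct=50.0, window=timedelta(hours=1))
assert a.record(True, t0) is False
assert a.record(False, t0 + timedelta(minutes=10)) is False  # 1/2 = 50%, not over
assert a.record(False, t0 + timedelta(minutes=20)) is True   # 2/3 ≈ 67%
```

Note that a threshold of 0% reproduces the current behavior: any failure in the window immediately returns True.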