Checks executed twice and no recovery notifications are sent #9995

Open · danpicpic opened this issue Feb 8, 2024 · 2 comments

Describe the bug

We have observed a couple of times in the last 3 weeks a strange behaviour where checks are performed twice and notifications are sent twice (at least the Problem ones), yet at the same time no Recovery notifications are ever sent.
Every time, it happened within a small time frame (e.g. between 8 am and 9 am), on a varying number of servers/services with no common pattern between them.

The checker and notification features are enabled in HA mode on both masters. In icinga2.log on both of them (is it normal that they log the same lines? Are they performing the same actions in parallel?) I see the following entries, where a Problem notification is sent but the Recovery one is not:

[2024-02-03 08:20:18 +0100] information/Checkable: Checkable 'hostxxx!servicexxxx' has 1 notification(s). Checking filters for type 'Problem', sends will be logged.
[2024-02-03 08:20:18 +0100] information/Notification: Sending 'Problem' notification 'hostxxx!servicexxxx!state-notification-to-service' for user 'dummy_user'
[2024-02-03 08:20:18 +0100] information/Notification: Completed sending 'Problem' notification 'hostxxx!servicexxxx!state-notification-to-service' for checkable 'hostxxx!servicexxxx' and user 'dummy_user' using command 'state-notification'.
[2024-02-03 08:20:18 +0100] information/Checkable: Checkable 'hostxxx!servicexxxx' has 1 notification(s). Checking filters for type 'Problem', sends will be logged.
[2024-02-03 08:43:18 +0100] information/Checkable: Checkable 'hostxxx!servicexxxx' has 1 notification(s). Checking filters for type 'Recovery', sends will be logged.
[2024-02-03 08:43:18 +0100] information/Checkable: Checkable 'hostxxx!servicexxxx' has 1 notification(s). Checking filters for type 'Recovery', sends will be logged.
  1. The first screenshot below corresponds to the log above. The messages about notifications not being sent are odd, since the Problem notification was sent anyway, while the Recovery one never was.
  2. The two screenshots show that every check/action is performed twice or more (soft state, hard state, OK, notifications).
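
For reference, "enabled in HA" above means the stock feature configuration on both masters, roughly the sketch below (illustrative, not our exact files; enable_ha = true is the NotificationComponent default, and check execution is load-balanced automatically between zone members that run the checker):

// features-enabled/checker.conf -- there is no per-feature HA switch here;
// checks are distributed automatically between connected zone members
// that have this feature enabled.
object CheckerComponent "checker" { }

// features-enabled/notification.conf -- enable_ha defaults to true, so only
// one of the two masters should send any given notification.
object NotificationComponent "notification" {
  enable_ha = true
}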

Screenshots

[screenshot: icinga_ss1]
[screenshot: icinga_ss2]

Your Environment

  • Version used (icinga2 --version): r2.14.1-1
  • Operating System and version: RHEL 9.2
  • Enabled features (icinga2 feature list): api-users api checker command graphite ido-mysql mainlog notification
  • Icinga Web 2 version and modules (System - About): 2.11.4
  • Config validation (icinga2 daemon -C): OK
  • If you run multiple Icinga 2 instances, the zones.conf file:
object Endpoint "master1" {
}
object Endpoint "master2" {
  host = "master2"
}
object Endpoint "satellite1" {
  host = "satellite1"
}
object Endpoint "satellite2" {
  host = "satellite2"
}
object Zone "director-global" {
  global = true
}
object Zone "global-templates" {
  global = true
}
object Zone "master" {
  endpoints = [ "master1", "master2", ]
}
object Zone "satellite" {
  endpoints = [ "satellite1", "satellite2", ]
  parent = "master"
}
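
To rule out a split brain between the two masters (which would explain both the duplicated checks/notifications and the inconsistent notification handling), something like the built-in cluster check can confirm the endpoints actually see each other. A minimal sketch, with an illustrative service name and assign rule:

// Runs the ITL "cluster" CheckCommand on each master; it reports non-OK
// when zone endpoints are not connected to each other.
apply Service "cluster-health" {
  check_command = "cluster"
  assign where host.name == "master1" || host.name == "master2"
}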

Additional context

  • We migrated our infrastructure from SLES 12.5 (Icinga 2.10.3) to RHEL 9 (Icinga 2.14.0) around 2 months ago
  • We have also installed jemalloc-5.2.1-2.el9.x86_64
  • At the beginning we only had test servers (~1000 of which with active notifications) to validate the new Icinga 2
  • 3 weeks ago we started monitoring the remaining ~2000 production servers and upgraded Icinga 2 to v2.14.1

We started seeing the error in the last 3 weeks, but we don't know whether it was introduced by the minor update to 2.14.1 or has been present since the initial migration; since we had fewer, less important servers before, it might simply have gone unnoticed.

@Al2Klimov (Member)

> Every time, it happened within a small time frame (e.g. between 8 am and 9 am)

Do those frames correlate with your Director deployments?

@danpicpic (Author)

> > Every time, it happened within a small time frame (e.g. between 8 am and 9 am)
>
> Do those frames correlate with your Director deployments?

We don't use Director, so no. It also happened a few other times with fewer servers involved (sometimes even a single event at a time).
