
Feature: Allow configuration of a rule evaluation delay #14061

Open · wants to merge 16 commits into base: main
Conversation

@gotjosh (Member) commented May 7, 2024

Now that Prometheus is allowed to be a remote write target, we think this is highly useful.

This basically enables Prometheus to do two things:

  1. Allow setting a server-wide rule evaluation delay.
  2. Allow the overriding of such a setting at a rule group level.

The main motivation of this work is to allow remote write receivers (in this particular case Prometheus itself) to delay alert rule evaluation by a certain period to take into account any sort of remote write delays.

In the early days of Mimir, our customers observed plenty of gaps in recording rules and flapping alerts due to remote write delays. Network anomalies are a matter of when, not if, and this change helps mitigate most of that flapping. In Mimir, we run it with a default of 1 minute, but for Prometheus I'm proposing a default of 0 to avoid any sort of breaking change and to leave it mostly as a tool for operators to decide how they want to balance timeliness of recording rule and alert evaluation against tolerance for remote write delays.

For context, this has been running in Mimir for years.

**Note for Reviewers**

I've tried to adhere as much as possible to the original author's code where it made sense -- however, there was a huge refactor of the rules package since this code was introduced, so during the merge I had to manually port a lot of it to the right place. Please take a careful look.

@pracucci (Contributor) left a comment

Should we update the documentation here and here, to mention the new field in the rules config file?

cmd/prometheus/main.go (outdated; resolved)
@pracucci (Contributor) left a comment

I compared the logic with the one we use in Mimir and I haven't seen any difference, so LGTM.

rules/rule.go (outdated; resolved)
@gotjosh (Member, Author) commented May 9, 2024

Should we update the documentation here and here, to mention the new field in the rules config file?

I have updated the docs here, as this is where we have the reference to the file format.

@gotjosh gotjosh marked this pull request as ready for review May 9, 2024 10:58
@gotjosh (Member, Author) commented May 9, 2024

@juliusv @roidelapluie @bwplotka @beorn7 @ArthurSens any thoughts?

@prymitive (Contributor) commented:

rule evaluation delay to me sounds like the rule query won't be executed immediately, but rather after a while. But this change, I think, is to use a constant offset when querying metrics during rule evaluation. How about rule evaluation offset?

@juliusv (Member) left a comment

rule evaluation delay to me sounds like the rule query won't be executed immediately, but rather after a while. But this change, I think, is to use a constant offset when querying metrics during rule evaluation. How about rule evaluation offset?

Yeah, from reading the PR description and the flag help string, I also thought this would just pause the rule manager initially and then start executing rules as normal. But as far as I can see, it's not actually pausing initially at all; it's changing every query to run at a past timestamp and then store the result at that past timestamp.

The closest analogy would be the offset modifier we have for selectors and subqueries in PromQL. Maybe the feature should be called "rule query offset" or something like that?

cmd/prometheus/main.go (outdated; resolved)
@@ -86,6 +86,9 @@ name: <string>
# rule can produce. 0 is no limit.
[ limit: <int> | default = 0 ]

# Delay rule evaluation of this particular group by the specified duration.
[ evaluation_delay: <duration> | default = rules.evaluation_delay ]
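For illustration, a rule group using the field as documented in this diff might look like the following sketch (group and rule names are hypothetical):

```yaml
groups:
  - name: remote-write-lagged    # hypothetical group name
    interval: 1m
    # Evaluate this group's queries 2 minutes in the past to tolerate
    # remote write lag, overriding the server-wide default.
    evaluation_delay: 2m
    rules:
      - record: job:http_requests:rate5m
        expr: sum by (job) (rate(http_requests_total[5m]))
```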
A member commented on the diff:

If I'm not mistaken, this would introduce the first config item that overrides a command-line flag... at least the other per-rule-group settings only override settings from the global configuration file. Makes me wonder:

  • Do we need this per group at all (at least initially, is someone asking for it already?)?
  • If so, would it be better to make this a global config option rather than a flag, to be more in line with what we do already?

@gotjosh (Member, Author) replied:

Do we need this per group at all (at least initially, is someone asking for it already?)?

From experience - this has proven significantly useful.

The argument was that we want a lower evaluation delay for all of our metrics to safeguard against remote write delays, but an even bigger one (or no delay at all, given their time sensitivity) for our "system rules", as those monitor hundreds of thousands of things where false positives trigger an alert storm. The same argument works in reverse: we don't want any delay for most of our metrics, but we do want this particular group to have a delay, because the remote write source it comes from tends to lag behind sometimes due to its size.

If so, would it be better to make this a global config option rather than a flag, to be more in line with what we do already?

I can see a world where you might want to control this at a server level - perhaps what I should introduce is a guard to make sure the config option cannot be lower than the server flag, which I think is what makes sense in my head.

CHANGELOG.md (outdated; resolved)
@gotjosh (Member, Author) commented May 16, 2024

The closest analogy would be the offset modifier we have for selectors and subqueries in PromQL. Maybe the feature should be called "rule query offset" or something like that?

I don't have a strong opinion on the name and I'm happy to go in whichever direction you think is right - just want to say that a rename would be a big source of pain for us, as we'll have to deprecate flags and change the config file format, but that's not a Prometheus problem.

I just want to make sure that whatever name we end up picking, you feel strongly about it - and it's not one of those "it would be nice if" situations.

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
@roidelapluie (Member) commented:

I am on Julius' side; while reading the PR I was thinking it was delaying evals for a certain time, not evaluating in the past.

@juliusv (Member) commented May 16, 2024

I don't have a strong opinion on the name and I'm happy to go in whichever direction you think is right - just want to say that a rename would be a big source of pain for us, as we'll have to deprecate flags and change the config file format, but that's not a Prometheus problem.

Yeah, I understand and am sorry for the pain, but I also believe it's worth making a user-facing option as clear as possible. Even users who don't need it will have to read about it in the docs and try to understand what it does and whether they need it.

I think even "offset" alone isn't fully clear, as that could just mean an offset in when rule evaluations are triggered, without changing the actual timestamp of queries into the past. So that's why I'd suggest rule_query_offset / ruleQueryOffset. This can just be queryOffset within the rules package and query_offset for the per-group option, as the rules context is clear there.

I can see a world where you might want to control the flag at a server level

Not sure I follow 100% - whether it's a flag or a global config option, both would control the option at a server level. A config option would just be more consistent in terms of how our existing configuration overrides work: I don't think we have a flag that can be overridden by any config option anywhere, but we have global config options that can be overridden in more specific places (e.g. rule evaluation intervals). That's just why it'd feel more natural to me to have the rule query offset global default setting in the config file as well (vs. as a flag). The other implication of putting something into the config vs. flag is that the setting has to be reloadable during runtime, which I hope is doable for the global rule query offset.

perhaps what I should introduce is a guard to make sure the config option cannot be lower than the server flag which I think it's what makes sense in my head.

I could imagine both configuring lower and higher offsets for specific groups, depending on whether most of your data comes from a regular source or from a delayed one (then you'd choose the default offset to be according to that and either configure lower or higher offsets for specific exceptional groups that use data from a different source).

@gotjosh (Member, Author) commented May 16, 2024

Yeah, I understand and am sorry for the pain, but I also believe it's worth making a user-facing option as clear as possible. Even users who don't need it will have to read about it in the docs and try to understand what it does and whether they need it.

Sounds good to me!

I think even "offset" alone isn't fully clear, as that could just mean an offset in when rule evaluations are triggered, without changing the actual timestamp of queries into the past. So that's why I'd suggest rule_query_offset / ruleQueryOffset. This can just be queryOffset within the rules package and query_offset for the per-group option, as the rules context is clear there.

This makes sense to me; of all the options, I like rule query offset / query offset the most. I'll make this change.

Not sure I follow 100% - whether it's a flag or a global config option, both would control the option at a server level. A config option would just be more consistent in terms of how our existing configuration overrides work: I don't think we have a flag that can be overridden by any config option anywhere, but we have global config options that can be overridden in more specific places (e.g. rule evaluation intervals). That's just why it'd feel more natural to me to have the rule query offset global default setting in the config file as well (vs. as a flag). The other implication of putting something into the config vs. flag is that the setting has to be reloadable during runtime, which I hope is doable for the global rule query offset.

🤦 Sorry, I misunderstood your argument entirely - yeah, I think this change makes perfect sense. The pattern of a global config option being overridable by a more specific per-component option is very common in the Prometheus world.

I could imagine both configuring lower and higher offsets for specific groups, depending on whether most of your data comes from a regular source or from a delayed one (then you'd choose the default offset to be according to that and either configure lower or higher offsets for specific exceptional groups that use data from a different source).

Sounds good - I'll ship it without the guard and we can evaluate it as it evolves.

codesome and others added 9 commits May 16, 2024 12:12
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
Signed-off-by: gotjosh <josue.abreu@gmail.com>
Signed-off-by: gotjosh <josue.abreu@gmail.com>
Signed-off-by: gotjosh <josue.abreu@gmail.com>
Signed-off-by: gotjosh <josue.abreu@gmail.com>
Signed-off-by: gotjosh <josue.abreu@gmail.com>
Signed-off-by: gotjosh <josue.abreu@gmail.com>
Signed-off-by: gotjosh <josue.abreu@gmail.com>
Signed-off-by: gotjosh <josue.abreu@gmail.com>
…e it a global configuration option.

Signed-off-by: gotjosh <josue.abreu@gmail.com>

Signed-off-by: gotjosh <josue.abreu@gmail.com>
Signed-off-by: gotjosh <josue.abreu@gmail.com>
@@ -86,6 +86,9 @@ name: <string>
# rule can produce. 0 is no limit.
[ limit: <int> | default = 0 ]

# Offset the rule evaluation of this particular group by the specified duration.
@gotjosh (Member, Author) commented on the diff:

I use slightly different wording here, as this is the most specific setting - instead of saying "initial delay...", I word it in the context of rule groups.

@@ -90,6 +91,7 @@ type GroupOptions struct {
	Rules           []Rule
	ShouldRestore   bool
	Opts            *ManagerOptions
	RuleQueryOffset *time.Duration
@gotjosh (Member, Author) commented on the diff:

All public methods/attributes get the Rule prefix.

Signed-off-by: gotjosh <josue.abreu@gmail.com>
@gotjosh (Member, Author) commented May 20, 2024

@juliusv @roidelapluie I think I have addressed all your concerns now. Let me know if you have any other thoughts.

You can look at the last 4 commits if you'd like to see what changed since your last review (the force push was a result of some tests that changed in main for manager_test.go).

@machine424 (Collaborator) left a comment

Maybe we should make it explicit that with such an offset, on shutdown of an instance an extra offset's worth of data will not be considered for evaluation. (It's not a magic solution for zero gaps.)

return *g.queryOffset
}

if g.opts.DefaultRuleQueryOffset != nil {
@machine424 (Collaborator) commented on the diff:

Shouldn't we add some pre-checks to ensure that we'll be able to append that "old" data?
And some pre-check regarding the eval_interval (offset < eval_interval, maybe)?
