Why is "prometheus" label required on PrometheusRules? #150

Open
conallprendergast opened this issue Dec 15, 2023 · 6 comments

Comments

@conallprendergast

When running in a cluster with PrometheusRules defined, the operator will error when the rules do not contain a prometheus label.
Why is this label required?
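
For context, this is the kind of resource in question; a minimal sketch where all names and label values are illustrative:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-alert-rules        # illustrative name
  namespace: monitoring            # illustrative namespace
  labels:
    prometheus: kubernetes         # the label the operator currently expects
spec:
  groups:
    - name: example.alerts
      rules:
        - alert: ExampleTargetDown
          expr: up{job="example"} == 0
          for: 5m
          labels:
            severity: warning
```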

@conallprendergast conallprendergast changed the title Why is "prometheus" label required on PrometheusRules Why is "prometheus" label required on PrometheusRules? Dec 15, 2023
@conallprendergast
Author

It seems that the prometheus label is commonly used so that the Prometheus Operator can select rules via its ruleSelector.
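
For example, a Prometheus resource that picks up rules by that label might look roughly like this (a sketch; the name and label value are illustrative):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: kubernetes
spec:
  # Only PrometheusRule objects matching this selector are loaded
  # into the Prometheus instance managed by the operator.
  ruleSelector:
    matchLabels:
      prometheus: kubernetes
```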

It may be worth allowing PrometheusRules that do not have a prometheus label

@talal
Collaborator

talal commented Dec 18, 2023

The operator also uses the prometheus label as the prefix for the PrometheusRule resources that contain the absence alert rules (see aggregation for more details).

For example, if a namespace has PrometheusRule resources with the prometheus label set to kubernetes, then the PrometheusRule resource created by the operator with the corresponding absence alert rules will be named kubernetes-absent-metric-alert-rules.
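
A sketch of that behaviour, with illustrative names; only the prometheus label and the generated resource name follow the pattern described above:

```yaml
# Existing rule in some namespace, labelled for the "kubernetes" Prometheus:
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: openstack-api-alerts       # illustrative source rule
  namespace: monitoring
  labels:
    prometheus: kubernetes
spec:
  groups:
    - name: api.alerts
      rules:
        - alert: OpenstackAPIDown
          expr: up{job="openstack-api"} == 0
---
# The operator aggregates the corresponding absence alert rules into a
# resource whose name is prefixed with the prometheus label value
# (spec omitted; it would contain the generated absence alerts):
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: kubernetes-absent-metric-alert-rules
  namespace: monitoring
```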

@conallprendergast
Author

Would it make sense to have an optional default prefix for the created PrometheusRule resource, for PrometheusRule resources that do not have a prometheus label?
Or possibly just to name the created PrometheusRule absent-metric-alert-rules?

@majewsky
Contributor

FWIW, we will have to think about this topic anyway. We have a ticket in our internal backlog concerning alerts that have the thanos-ruler label instead of the prometheus label.

@conallprendergast
Author

conallprendergast commented Dec 19, 2023

You could have a flag called "aggregate-rule" or something that takes a template string, e.g. by default "{.prometheusrule.metadata.labels.prometheus}-absent-metrics".

This could allow aggregating into one new PrometheusRule per discovered rule (e.g. if we specified "{.prometheusrule.metadata.name}-absent-metrics") rather than one per namespace.

It would also allow custom naming and remove the dependency on any particular labels.
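
Purely as an illustration of this proposal (the flag name and template syntax here are hypothetical, not an existing option of the operator):

```yaml
# Hypothetical args on the operator Deployment showing the proposed flag:
args:
  # default: one aggregate rule per namespace, keyed on the prometheus label
  - --aggregate-rule={.prometheusrule.metadata.labels.prometheus}-absent-metrics
  # alternative: one aggregate rule per source PrometheusRule, no label dependency
  # - --aggregate-rule={.prometheusrule.metadata.name}-absent-metrics
```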

@conallprendergast
Author

For your information, I have created PR #153 that would suit our use case. Feel free to add to it, change it, etc.
