[enterprise-4.12] Issue in file monitoring/configuring-the-monitoring-stack.adoc #57544

Open
pfeifferj opened this issue Mar 22, 2023 · 10 comments
Labels: lifecycle/frozen (indicates that an issue or PR should not be auto-closed due to staleness)

@pfeifferj (Member)

Which section(s) is the issue in?

https://docs.openshift.com/container-platform/4.12/monitoring/configuring-the-monitoring-stack.html

What needs fixing?

There is conflicting or unclear information regarding custom Alertmanager configurations. One section says these changes should be made in a Secret resource, while another says a ConfigMap. This should be clarified.

Example:

To configure core OpenShift Container Platform monitoring components, you must create the cluster-monitoring-config ConfigMap object in the openshift-monitoring project.
The Alertmanager configuration is deployed as a secret resource in the openshift-monitoring namespace. If you have enabled a separate Alertmanager instance for user-defined alert routing, an Alertmanager configuration is also deployed as a secret resource in the openshift-user-workload-monitoring namespace. To configure additional routes for any instance of Alertmanager, you need to decode, modify, and then encode that secret. This procedure is a supported exception to the preceding statement.
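
For illustration, here is a minimal sketch of the decode/modify/encode workflow the quoted passage describes, assuming the default alertmanager-main secret in the openshift-monitoring namespace (the commands are standard oc and base64 usage, not text taken from the docs page):

    # Extract and decode the current Alertmanager configuration
    oc -n openshift-monitoring get secret alertmanager-main \
      --template='{{ index .data "alertmanager.yaml" }}' | base64 -d > alertmanager.yaml

    # ... edit alertmanager.yaml locally to add routes or receivers ...

    # Re-encode the edited file and replace the secret in place
    oc -n openshift-monitoring create secret generic alertmanager-main \
      --from-file=alertmanager.yaml --dry-run=client -o yaml \
      | oc -n openshift-monitoring replace -f -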
@bburt-rh (Contributor)

@pfeifferj - Thanks for taking the time to write up this issue. I am the technical writer for monitoring, and we will investigate and decide how best to resolve the issue.

@simonpasquier or @jan--f - Can you PTAL at this issue and comment about the best way to address it?

@bburt-rh (Contributor)

@simonpasquier @jan--f - Can you PTAL?

@simonpasquier

The first sentence refers to the cluster monitoring operator's configuration which lives in the cluster-monitoring-config ConfigMap. The second sentence refers to the Alertmanager's configuration which lives either in the alertmanager-main secret (openshift-monitoring namespace) or in the alertmanager-user-workload secret (openshift-user-workload-monitoring namespace).

Having said that, we could mention explicitly the names of the Alertmanager secrets in the documentation.
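
To make the distinction concrete, a quick sketch of how one might inspect each object mentioned above (object names as given in this comment; the commands are ordinary oc usage and are illustrative only):

    # Cluster Monitoring Operator configuration (ConfigMap)
    oc -n openshift-monitoring get configmap cluster-monitoring-config -o yaml

    # Alertmanager configuration for the platform instance (Secret)
    oc -n openshift-monitoring get secret alertmanager-main -o yaml

    # Alertmanager configuration for user-defined alert routing, if enabled (Secret)
    oc -n openshift-user-workload-monitoring get secret alertmanager-user-workload -o yaml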

@bburt-rh (Contributor)

bburt-rh commented May 8, 2023

We are now tracking this as a doc issue here: https://issues.redhat.com/browse/RHDEVDOCS-5300

@openshift-bot

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

openshift-ci bot added the lifecycle/stale label (denotes an issue or PR has remained open with no activity and has become stale) on Aug 7, 2023
@bburt-rh (Contributor)

bburt-rh commented Aug 7, 2023

/remove-lifecycle stale

openshift-ci bot removed the lifecycle/stale label on Aug 7, 2023
@openshift-bot

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

openshift-ci bot added the lifecycle/stale label on Nov 6, 2023
@bburt-rh (Contributor)

bburt-rh commented Nov 6, 2023

/remove-lifecycle stale

openshift-ci bot removed the lifecycle/stale label on Nov 6, 2023
@openshift-bot

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

openshift-ci bot added the lifecycle/stale label on Feb 5, 2024
@bburt-rh (Contributor)

bburt-rh commented Feb 5, 2024

/lifecycle frozen

openshift-ci bot added the lifecycle/frozen label (indicates that an issue or PR should not be auto-closed due to staleness) and removed the lifecycle/stale label on Feb 5, 2024