
Added a section for bad monitoring examples #15

Open

laurianttila wants to merge 2 commits into master
Conversation

laurianttila

This was discussed in an LCM-related workshop; feel free to groom the text in case it's awful.

@ilkka
Contributor

ilkka commented Feb 7, 2016

A thought: since these "war stories" are a new thing, what do you think about putting them in a .md file or a section of their own, and then linking to them from a shorter text that has the basic problem description and recommendation? That way, in case the war stories include multiple examples of bad practices, they could be linked to from multiple places in the document. Maybe the stories themselves could even have a section that spells out the causes of the problems, and links to the relevant best practices.

@kypeli

kypeli commented Feb 7, 2016

These kinds of war stories are really valuable! Would it make sense to link to a separate .md file from this document describing how things can go wrong, but then also add to this document solutions or proposals on how to do monitoring? Services like Pingdom and New Relic come to mind, which can do "smart alerting" when something happens.

@@ -420,6 +421,30 @@ The load balancer health check page should be placed at a `/status/health` URL.

The status pages may need proper authorization in place, especially in case they expose debugging information in status messages or application metrics. HTTP basic authentication or IP-based restrictions are usually good enough candidates to consider.

## Bad examples of monitoring

When crafting a new service, it’s tempting to create some basic monitoring, like automatically sent emails incase of errors.
Contributor (inline review comment)

in case
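As an illustration of the authorization note in the context lines above (HTTP basic authentication or IP-based restrictions in front of status pages), here is a minimal sketch assuming a Flask application; the credentials, realm, and handler are placeholders for illustration, not code from this repository:

```python
# Minimal sketch: protect a /status/health page with HTTP basic auth.
from functools import wraps

from flask import Flask, Response, jsonify, request

app = Flask(__name__)

# Placeholder credentials; in practice these would come from configuration.
STATUS_USER = "status"
STATUS_PASSWORD = "change-me"


def requires_basic_auth(view):
    @wraps(view)
    def wrapped(*args, **kwargs):
        auth = request.authorization
        if not auth or auth.username != STATUS_USER or auth.password != STATUS_PASSWORD:
            # No or wrong credentials: ask the client to authenticate.
            return Response(
                status=401,
                headers={"WWW-Authenticate": 'Basic realm="status"'},
            )
        return view(*args, **kwargs)
    return wrapped


@app.route("/status/health")
@requires_basic_auth
def health():
    # Expose only what the load balancer and operators actually need.
    return jsonify(status="ok")
```

An IP-based restriction would typically live in the reverse proxy or load balancer configuration instead, keeping the application itself unchanged.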

@phadej
Contributor

phadej commented Feb 7, 2016

war stories: write a blog post and link to it. Also discuss what should have been the right (or at least better) approach. I'm 👎 on merging this as is.

@laurianttila
Author

Maybe rewrite this as "word of warning - use email alerts with caution" (with some additional words about hurry/small budget/easy way)? War stories then as a blog post :).

@ilkka
Contributor

ilkka commented Feb 8, 2016

I'd like to see something that clearly states that email alerts should be reserved for high priority, disruptive situations, and care should be taken to avoid false positives and to throttle them appropriately; otherwise they either get ignored, or cause more problems downstream with their volume.
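To make the throttling idea concrete, a minimal sketch of a per-error-type cooldown; `send_email`, the cooldown length, and the error-key scheme are assumptions for illustration only:

```python
# Minimal sketch: suppress repeat email alerts for the same error key
# within a cooldown window, so email stays reserved for new, high-priority
# problems instead of flooding inboxes during an incident.
import time

ALERT_COOLDOWN_SECONDS = 15 * 60  # assumption: at most one email per error type per 15 min
_last_sent = {}  # error key -> timestamp of the last email sent for it


def send_email(subject, body):
    # Placeholder for whatever mail transport is actually in use.
    print(f"ALERT: {subject}\n{body}")


def alert(error_key, message):
    """Send an email alert unless the same error was alerted recently."""
    now = time.time()
    last = _last_sent.get(error_key)
    if last is not None and now - last < ALERT_COOLDOWN_SECONDS:
        return  # throttled: repeated occurrences of a known problem
    _last_sent[error_key] = now
    send_email(subject=f"[ALERT] {error_key}", body=message)
```

Real setups usually route alerts through a dedicated alerting tool rather than hand-rolled email, but the cooldown shows why raw per-error emails quickly become noise.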
