Prevent excessive logging in certain failure scenarios #10723
michaelklishin changed the title from "Prevent excessive logging" to "Prevent excessive logging in certain failure scenarios" on Mar 12, 2024
another one from the channel process - this time it's the mailbox, not the state:
(log excerpt elided)

osiris:

(log excerpt elided)
Describe the bug
This is an umbrella issue for a bunch of small issues.
Some RabbitMQ components can log excessively when they crash or when some other relatively unusual event happens (though some such events, like a cluster node failure, are actually fairly common).
The goal is to avoid output like this in the logs:

(log excerpt elided)

or this:

(log excerpt elided)
Reproduction steps
I'm running chaos tests to see which situations of this kind I can trigger. So far there are two:
1. `osiris_replica` can fail, and its entire mailbox gets logged. The mailbox can hold a huge amount of binary data coming from a TCP connection. For example, after running out of disk space I get this:

(log excerpt elided)

with lots and lots of data later. Moreover, this process is continuously restarted, so this data is printed over and over as the stream leader keeps trying to send the data to this replica.
2. `pending_ack` (… stuff above)

Expected behavior
General recommendations for logging:
- `~p` should not be used for log formatting

Additional context
No response
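To illustrate the `~p` recommendation above: Erlang's `~p` directive pretty-prints an entire term, so formatting a crashed process's state or mailbox can emit megabytes of log output. The depth-limited `~P` (and `~W`) directives take an extra depth argument and truncate nested terms with `...` instead. A minimal sketch as a standalone escript (illustrative only, not code from RabbitMQ or osiris):

```erlang
#!/usr/bin/env escript
%% Sketch: contrast unbounded ~p with depth-limited ~P.
main(_) ->
    %% Simulate a mailbox full of large binary messages, loosely modeled
    %% on the osiris_replica crash described in this issue.
    Mailbox = [{tcp, fake_port, binary:copy(<<0>>, 4096)}
               || _ <- lists:seq(1, 1000)],
    %% ~p prints the whole term: output size grows with the data.
    Full = iolist_size(io_lib:format("~p", [Mailbox])),
    %% ~P takes a depth limit and truncates deeply nested terms with "...",
    %% keeping the log line small no matter how big the term is.
    Limited = iolist_size(io_lib:format("~P", [Mailbox, 10])),
    io:format("~~p: ~b chars, ~~P: ~b chars~n", [Full, Limited]).
```

The same idea applies via the `depth` and `max_size` options of OTP's `logger_formatter`, which bound formatted log output globally rather than per call site.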