kminion fails to export metrics when a broker is stopping, starting or restarting #253

dattagiri opened this issue Mar 11, 2024 · 7 comments



We have observed that kminion fails to report metrics while a Kafka broker is starting, stopping, or restarting.
Once the Kafka process has finished stopping or starting, kminion starts reporting the metrics again as expected.

@dattagiri dattagiri changed the title kminion fails to export metrics when a broker is stopping, starting or restarting. kminion fails to export metrics when a broker is stopping, starting or restarting when using kafkaAPI scraping Mar 11, 2024

weeco commented Mar 11, 2024

Hey, are you using the endToEnd reporting? If so, I believe I'm already looking into this (I haven't found a proper solution yet though; here's the draft PR: #252).


dattagiri commented Mar 12, 2024

We are not using end-to-end reporting. We are only scraping the consumer group lag metrics, and the behavior is the same with both the adminApi and offsetsTopic scrape modes.
Even disabling logDirs scraping does not change the outcome :(
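
For reference, the relevant part of our config looks roughly like the sketch below. This follows kminion's reference config layout, but the exact keys and defaults should be double-checked against the version in use.

```yaml
minion:
  consumerGroups:
    enabled: true
    # The gap occurs with either scrape mode:
    scrapeMode: offsetsTopic   # also tried: adminApi
  logDirs:
    enabled: false             # disabling logDirs does not change the outcome
```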

@dattagiri dattagiri changed the title kminion fails to export metrics when a broker is stopping, starting or restarting when using kafkaAPI scraping kminion fails to export metrics when a broker is stopping, starting or restarting Mar 12, 2024

weeco commented Mar 12, 2024

Okay, then I need more details, ideally instructions to reproduce this. This should definitely not happen. Any suspicious log messages?

@dattagiri

So far, we haven't seen any suspicious log messages.

This can be reproduced as follows:

  1. Keep the Kafka cluster and kminion running.
  2. Watch the consumer lag, or any other metric you want to monitor, through Prometheus/Grafana.
  3. Stop Kafka on one broker. This can result in no metrics being exposed temporarily (1 or 2 minutes, but it varies); then kminion resumes exposing the metrics.
  4. After a while, start the Kafka service on that broker again. This can again result in no metrics being exposed temporarily (2 or 3 minutes, but it varies); then kminion resumes exposing the metrics.

This can also be reproduced by simply restarting the Kafka process.
Please note that you won't see this absence of metrics every single time the Kafka process is stopped, started, or restarted, but it does happen most of the time.
Sometimes the Kafka stop and start is so quick that the gap falls within a single Prometheus scrape interval and isn't visible.
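
To see whether kminion stops answering entirely or just answers slowly during these steps, one can time the /metrics responses directly. Below is a small, hypothetical Go probe; the listen address is an assumption (kminion's exporter port in our setup), so adjust it as needed.

```go
// probe.go: repeatedly time kminion's /metrics responses while a broker is
// being stopped/started, to see whether the exporter stops answering or
// merely answers slowly.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Assumed kminion listen address; adjust to your exporter config.
	const metricsURL = "http://localhost:8080/metrics"

	client := &http.Client{Timeout: 30 * time.Second}

	for {
		start := time.Now()
		resp, err := client.Get(metricsURL)
		if err != nil {
			fmt.Printf("%s scrape failed: %v\n", time.Now().Format(time.RFC3339), err)
		} else {
			n, _ := io.Copy(io.Discard, resp.Body)
			resp.Body.Close()
			fmt.Printf("%s status=%d bytes=%d took=%s\n",
				time.Now().Format(time.RFC3339), resp.StatusCode, n, time.Since(start))
		}
		time.Sleep(10 * time.Second)
	}
}
```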


dattagiri commented Mar 12, 2024

A few more details regarding the kminion logs when the Kafka service on one of the brokers was stopped and then started. The scrape mode is offsetsTopic with logDirs disabled.

# kminion logs while broker a1 was being stopped @15:19

{"level":"info","ts":"2024-03-12T15:19:41.787Z","logger":"main.kgo_client","msg":"metadata update triggered","why":"reload offsets from load failure"}
{"level":"warn","ts":"2024-03-12T15:19:43.087Z","logger":"main.kgo_client","msg":"unable to open connection to broker","addr":"broker-a1:kafka-port","broker":"1001","err":"dial tcp broker-ip:kafka-port: connect: connection refused"}
{"level":"info","ts":"2024-03-12T15:19:43.344Z","logger":"main.kgo_client","msg":"immediate metadata update triggered","why":"fetch had inner topic errors: FENCED_LEADER_EPOCH{__consumer_offsets[46]}"}
{"level":"warn","ts":"2024-03-12T15:19:44.020Z","logger":"main.kgo_client","msg":"unable to open connection to broker","addr":"broker-a1:kafka-port","broker":"1001","err":"dial tcp broker-ip:kafka-port: connect: connection refused"}
{"level":"info","ts":"2024-03-12T15:19:45.802Z","logger":"main.kgo_client","msg":"immediate metadata update triggered","why":"fetch had inner topic errors: FENCED_LEADER_EPOCH{__consumer_offsets[1 5 13 14]}"}
{"level":"info","ts":"2024-03-12T15:19:47.132Z","logger":"main.kgo_client","msg":"immediate metadata update triggered","why":"fetch had inner topic errors: FENCED_LEADER_EPOCH{__consumer_offsets[47 48]}"}
{"level":"warn","ts":"2024-03-12T15:19:47.741Z","logger":"main.kgo_client","msg":"unable to open connection to broker","addr":"broker-a1:kafka-port","broker":"1001","err":"dial tcp broker-ip:kafka-port: connect: connection refused"}
{"level":"info","ts":"2024-03-12T15:19:47.995Z","logger":"main.kgo_client","msg":"immediate metadata update triggered","why":"fetch had inner topic errors: FENCED_LEADER_EPOCH{__consumer_offsets[2 6 10 12 15 19 23 27 33 34 35 36 39 40 44]}"}

# kminion logs after broker a1 was started @15:23

{"level":"info","ts":"2024-03-12T15:25:10.675Z","logger":"main.kgo_client","msg":"immediate metadata update triggered","why":"fetch had inner topic errors: FENCED_LEADER_EPOCH{__consumer_offsets[0]}"}
{"level":"info","ts":"2024-03-12T15:25:10.779Z","logger":"main.kgo_client","msg":"immediate metadata update triggered","why":"fetch had inner topic errors: FENCED_LEADER_EPOCH{__consumer_offsets[3 8 16 20 25 29 42 43 49]}"}
{"level":"info","ts":"2024-03-12T15:25:10.826Z","logger":"main.kgo_client","msg":"immediate metadata update triggered","why":"fetch had inner topic errors: FENCED_LEADER_EPOCH{__consumer_offsets[4 7]}"}
{"level":"info","ts":"2024-03-12T15:25:13.746Z","logger":"main.kgo_client","msg":"metadata update triggered","why":"reload offsets from load failure"}
{"level":"info","ts":"2024-03-12T15:25:14.271Z","logger":"main.kgo_client","msg":"metadata update triggered","why":"reload offsets from load failure"}
{"level":"info","ts":"2024-03-12T15:25:17.210Z","logger":"main.kgo_client","msg":"metadata update triggered","why":"reload offsets from load failure"}

@dattagiri

Screenshot (2024-03-12 at 15:40:10): consumer lag metrics are absent while the stop and start described above was happening.


sszp commented Mar 27, 2024

Further debugging of this issue shows that kminion does expose the metrics every time we query it, but while brokers are being stopped it takes kminion longer to generate them, most likely because of the failing connections to the broker that is going down. The typical 6-7 s response time increases to 15-20 s, which is longer than the scrape timeout on our Prometheus servers; that is what causes the missing metrics.

The question is whether it is possible to handle the failing broker connections in a way that does not increase the response time.
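
One possible direction (a sketch only, not kminion's actual code) is to issue the per-broker requests concurrently and give each one its own short deadline, so a broker that refuses connections costs a bounded amount of time and the scrape exports whatever the healthy brokers return. `Broker` and `fetchFromBroker` below are hypothetical placeholders.

```go
// Sketch: bound the cost of an unreachable broker by fanning out per-broker
// requests concurrently, each with its own short timeout, and aggregating
// whatever succeeds.
package main

import (
	"context"
	"log"
	"sync"
	"time"
)

type Broker struct {
	ID   int32
	Addr string
}

// fetchFromBroker stands in for any per-broker request (log dirs, offsets, ...).
// A real implementation would issue the Kafka API call here and honor ctx.
func fetchFromBroker(ctx context.Context, b Broker) (any, error) {
	select {
	case <-ctx.Done():
		return nil, ctx.Err()
	case <-time.After(100 * time.Millisecond): // simulate a healthy broker
		return struct{}{}, nil
	}
}

func scrapeAll(ctx context.Context, brokers []Broker) []any {
	var (
		mu      sync.Mutex
		results []any
		wg      sync.WaitGroup
	)
	for _, b := range brokers {
		wg.Add(1)
		go func(b Broker) {
			defer wg.Done()
			// Per-broker deadline: a dead broker costs at most 3s instead of
			// the full dial/retry backoff.
			bctx, cancel := context.WithTimeout(ctx, 3*time.Second)
			defer cancel()
			res, err := fetchFromBroker(bctx, b)
			if err != nil {
				log.Printf("skipping broker %d (%s): %v", b.ID, b.Addr, err)
				return // export what the healthy brokers returned
			}
			mu.Lock()
			results = append(results, res)
			mu.Unlock()
		}(b)
	}
	wg.Wait()
	return results
}

func main() {
	brokers := []Broker{{ID: 1001, Addr: "broker-a1:9092"}, {ID: 1002, Addr: "broker-a2:9092"}}
	log.Printf("collected %d broker responses", len(scrapeAll(context.Background(), brokers)))
}
```

A similar effect might also be achievable by tightening the client's dial and request timeouts, but the fan-out keeps the overall scrape latency close to that of the slowest healthy broker rather than the sum of per-broker failures.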
