[Feature Request] Prometheus Endpoint #898
Comments
I know the project's main resources are focused on developing its core functionality, but it would really be great to have a Prometheus endpoint for collecting metrics.
@19wolf @konstantin-921 do you use LizardFS commercially, and do your businesses need such a feature? Just curious about the demand; maybe we'll put it on the roadmap :)
Yes, we use LizardFS in commercial development, and we now need monitoring for it. We use Prometheus exporters or integrated endpoints to collect all our metrics, so it would be great if you could add an endpoint exposing Prometheus metrics :)
I do not use LizardFS commercially, and in fact no longer use it at home either (for now), but if I were to use it at work I would definitely need this |
So far I have added a workaround: every minute, a cron job runs queries via lizardfs-admin. My cron job looks like this:
Then I use the Node Exporter Textfile Collector - https://github.com/prometheus/node_exporter#textfile-collector - to pick up these metrics. It's not the most convenient setup, but at least it works.
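The original cron snippet is not preserved in this thread, but the described workaround (query `lizardfs-admin` periodically, write the result into the textfile-collector directory) can be sketched roughly as follows. This is only an illustration: the `lizardfs-admin info mfsmaster 9421` invocation, the `name: value` output format assumed by the parser, and the textfile directory path are all assumptions; adapt them to your installation.

```python
#!/usr/bin/env python3
"""Sketch of a cron-driven LizardFS exporter for the node_exporter
textfile collector. Hypothetical: the lizardfs-admin subcommand and its
output format are assumptions, not a documented interface."""
import os
import subprocess
import tempfile

# Must match node_exporter's --collector.textfile.directory flag.
TEXTFILE_DIR = "/var/lib/node_exporter/textfile_collector"


def to_prom_lines(raw: str) -> list[str]:
    """Turn assumed 'name: value' output rows into Prometheus exposition lines."""
    lines = []
    for row in raw.splitlines():
        if ":" not in row:
            continue
        name, value = row.split(":", 1)
        metric = "lizardfs_" + name.strip().lower().replace(" ", "_")
        try:
            lines.append(f"{metric} {float(value.strip())}")
        except ValueError:
            continue  # skip non-numeric rows
    return lines


def main() -> None:
    # Hypothetical invocation; check `lizardfs-admin --help` for real subcommands.
    raw = subprocess.run(
        ["lizardfs-admin", "info", "mfsmaster", "9421"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Write to a temp file and rename, so node_exporter never scrapes
    # a half-written .prom file.
    fd, tmp = tempfile.mkstemp(dir=TEXTFILE_DIR)
    with os.fdopen(fd, "w") as f:
        f.write("\n".join(to_prom_lines(raw)) + "\n")
    os.replace(tmp, os.path.join(TEXTFILE_DIR, "lizardfs.prom"))


if __name__ == "__main__":
    main()
```

A crontab entry like `* * * * * root /usr/local/bin/lizardfs_textfile.py` would then refresh the metrics every minute, matching the cadence described above.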
@konstantin-921 I've sent you an email about this. Please, check your inbox :) |
For some reason I don't see any new emails from you. Even the spam folder is empty. My email is konstantin-921@yandex.ru. |
@konstantin-921 I've pinged you :) The e-mail address is correct. I am sending it from anton.borecki@lizardfs.com. |
I'd love to have an endpoint for Prometheus to scrape, exposing chunk health, server usage, and the other metrics already available in the cgi-server.