Add a description to Metrics and emit it #3408

Open
oleiade opened this issue Oct 20, 2023 · 4 comments

oleiade commented Oct 20, 2023

Feature Description

In the context of the k6 dashboard project, we have stumbled upon the need to present a textual description of metrics to users. For example, as a user hovering over the HTTP duration panel, I would see a short description on screen: "http_req_connecting, time spent establishing a TCP connection to the remote host" (the described UX is only an example and not representative of what might be delivered).

We will likely start by grabbing and storing the description for each metric ourselves. Still, as @szkiba brought up when we discussed it, it would be very convenient if k6 could emit each metric's description as part of its output, in a fashion similar to what Prometheus' scrapable /metrics endpoint contains.

Cc @codebien: you often work in that part of the codebase, so this might interest you, and you might have opinions and/or ideas about it.

Suggested Solution (optional)

We have yet to propose a design for it.

Already existing or connected issues / PRs (optional)

xk6-dashboard#83

@codebien codebien self-assigned this Oct 23, 2023
mstoykov commented

@oleiade can you elaborate on what Prometheus does? I can't find it easily. Maybe also what OTel does, if anything?


codebien commented Nov 2, 2023

@mstoykov I expect what they are requesting is something similar to the Help field in Prometheus or the Description field in OTel.
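For reference, both attach the description when the instrument is defined, not on every sample. A minimal sketch (not k6 code; assuming the standard prometheus/client_golang and OTel Go metric APIs):

package main

import (
    "context"

    "github.com/prometheus/client_golang/prometheus"
    "go.opentelemetry.io/otel"
    "go.opentelemetry.io/otel/metric"
)

func main() {
    // Prometheus: the Help string becomes the "# HELP ..." line on /metrics.
    ops := prometheus.NewCounter(prometheus.CounterOpts{
        Name: "myapp_processed_ops_total",
        Help: "The total number of processed events",
    })
    prometheus.MustRegister(ops)
    ops.Inc()

    // OpenTelemetry: the description travels with the instrument definition.
    counter, err := otel.Meter("myapp").Int64Counter(
        "myapp.processed.ops",
        metric.WithDescription("The total number of processed events"),
    )
    if err != nil {
        panic(err)
    }
    counter.Add(context.Background(), 1)
}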

Regarding the proposal, I'm not much in favor of it. It sounds like a visualization concern, and I think it should remain confined to that specific context.

Considering the number of samples we emit to the output, adding even a single field is not a negligible amount of data. An alternative could be to have it on our internal type and let extensions fetch it with the Get or All methods. But we have to keep in mind that Registry access means locking, so depending on the nature of the running test (especially after we have implemented #1321), we may hit issues if an extension doesn't use it properly.

So, I would prefer to avoid it until there is higher demand for this feature, and for now resolve this issue within the specific domain.
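To make that alternative concrete, a hypothetical sketch (the names loosely mirror k6's metrics package, but the Description field and this exact shape are illustrative, not the current API):

package example

import "sync"

// Hypothetical: the description lives on the internal metric type, and
// outputs/extensions fetch it through the registry rather than via samples.
type Metric struct {
    Name        string
    Description string // new, per-metric help text (does not exist today)
}

type Registry struct {
    mu      sync.RWMutex // Registry access means locking
    metrics map[string]*Metric
}

// Get returns the registered metric by name, or nil if unknown.
func (r *Registry) Get(name string) *Metric {
    r.mu.RLock()
    defer r.mu.RUnlock()
    return r.metrics[name]
}

An output would look a description up once, e.g. the first time it sees a metric, rather than on every sample, to keep lock contention low.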


oleiade commented Nov 2, 2023

Hey folks 👋🏻

I believe @codebien's interpretation is correct. To clarify what we meant, the inspiration came from what is exposed to Prometheus on the endpoints it scrapes. The /metrics endpoint exposed by default consists of a series of statements like these:

# HELP myapp_processed_ops_total The total number of processed events
# TYPE myapp_processed_ops_total counter
myapp_processed_ops_total 5

We would be interested in metrics "publishing" their description somehow, in the same way as above 👆.

> So, I would prefer to avoid it until there is higher demand for this feature, and for now resolve this issue within the specific domain.

Indeed, this is not absolutely mandatory on our side yet either; we could live without it for now and handle the descriptions ourselves.

For the fun and profit of tinkering around the issue (feel free to ignore)

> Considering the number of samples we emit to the output, adding even a single field is not a negligible amount of data.

Whenever I think of outputs, my reference point is often the JSON output, which in this case "declares" metrics before samples.
For instance: {"type":"Metric","data":{"name":"http_reqs","type":"counter","contains":"default","thresholds":[],"submetrics":null},"metric":"http_reqs"}.

But I'm also not super familiar with how other outputs emit/declare metrics and samples to their endpoints, such as Prometheus Remote Write for instance. Thus, I take from your sentence that it's probably not safe to assume the same "metric declaration" step is part of every output? Otherwise, my assumption would be that having a description as part of it would be manageable, as it would be communicated only once?
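For illustration only (not the actual k6 JSON output types), the one-time declaration above could hypothetically carry the description like this:

package example

// Mirrors the JSON metric-declaration envelope shown above; the
// "description" field is a hypothetical addition, not current behaviour.
type metricDeclaration struct {
    Type   string     `json:"type"`   // "Metric" for declarations
    Metric string     `json:"metric"` // metric name, repeated at the top level
    Data   metricData `json:"data"`
}

type metricData struct {
    Name        string   `json:"name"`
    Type        string   `json:"type"`     // counter, gauge, rate, trend
    Contains    string   `json:"contains"`
    Thresholds  []string `json:"thresholds"`
    Submetrics  []string `json:"submetrics"`
    Description string   `json:"description,omitempty"` // hypothetical addition
}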


codebien commented Nov 2, 2023

> Otherwise, my assumption would be that having a description as part of it would be manageable, as it would be communicated only once?

I forgot about JSON and potentially other outputs doing something similar. Typically the database outputs discover metrics in real time because they are mostly a proxy for time-series samples (more or less aggregated).
As an alternative, we might evaluate delegating the decision to expose this additional field to each individual output.

@oleiade How does the dashboard discover new metrics? Are they identified in real time, or pre-defined only once in some way?
