
API does not return detailed or per-request metrics #1875

Closed
benc-uk opened this issue Mar 1, 2021 · 3 comments
benc-uk commented Mar 1, 2021

Feature Description

The REST API provides metrics for the currently running test, but these metrics do not include any data for the individual requests or groups in the test.

These per-request metrics are available in the outputs (e.g. CSV), where you can see the request names, tags, etc. and the metrics for each one, but there is no way to access them via the API.

The use case is for exporters or other tools to have real-time access to the per-request details of a running test.

For example, I have created a Prometheus exporter for k6, but without access to the metrics of each request it is of limited use:
https://github.com/benc-uk/k6-prometheus-exporter/
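
For illustration, here is a minimal sketch (in Go, assuming the default REST API address of localhost:6565) of how an exporter can poll the existing API today; the response only contains the aggregate metrics, with no per-request breakdown:

// A minimal sketch of polling the existing k6 REST API, assuming the default
// API address of localhost:6565.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	resp, err := http.Get("http://localhost:6565/v1/metrics")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	// The body is a JSON document with one entry per metric
	// (e.g. http_req_duration), aggregated across all requests.
	fmt.Println(string(body))
}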

Suggested Solution (optional)

New API endpoints /v1/metrics/requests and /v1/metrics/requests/{request_name}, which could return a JSON list of requests and, for each request, its metrics:

{
  "requests": [
    {
      "method": "GET",
      "name": "Get example page",
      "status": "200",
      "timestamp": "1614620816",
      "url": "https://example.net",
      "metrics": [
        {
          "type": "metrics",
          "id": "http_req_waiting",
          "attributes": {
            "type": "trend",
            "contains": "time",
            "tainted": null,
            "sample": {
              "avg": 93.606275,
              "max": 94.1196,
              "med": 93.5213,
              "min": 93.2629,
              "p(90)": 93.99582000000001,
              "p(95)": 94.05771
            }
          }
        }
      ]
    }
  ]
}

na-- (Member) commented Mar 2, 2021

This will very likely never be added to the built-in REST API, sorry.

For one thing, if there's nothing that "consumes" the generated metric samples, this would be one giant built-in memory leak. We already have a small taste of that, because of the Trend details we currently keep in memory to calculate the end-of-test summary, thresholds and, indeed, the REST API stats you already use. This would be much worse, because we'd be keeping a bunch of details in memory, like the metric tags, not just simple float64 numbers. For more details see the discussion in #763 and the connected issues.

So, this has to be done as an output (https://k6.io/docs/getting-started/results-output#external-outputs). Outputs are explicitly enabled with the --out parameter and don't incur any CPU or memory costs when they are not used. And while Prometheus remote write (#1761) will probably be simpler, there's nothing actually stopping an output from spinning up its own HTTP server to serve metrics, so that Prometheus can poll them (#858, #1787).

And, as you've correctly surmised in #1730 (comment), the upcoming output extensions in k6 v0.31.0 can be used as a first step here. We will probably adopt some Prometheus/OpenMetrics output in the k6 core eventually, just because it's so popular. But the first iteration will be best done as an extension.
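
For illustration, here is a minimal sketch of what such an output extension could look like. It assumes the output API of more recent k6 versions (go.k6.io/k6/output and go.k6.io/k6/metrics; at the time of v0.31.0 the sample types lived in the stats package instead). The extension name "prom-http", the listen port, and the naive per-metric sum are all illustrative choices, not an actual implementation:

// A sketch of a k6 output extension that serves aggregated metrics over HTTP
// so Prometheus can scrape them. Names, port, and aggregation are illustrative.
package promhttp

import (
	"fmt"
	"net/http"
	"sync"

	"go.k6.io/k6/metrics"
	"go.k6.io/k6/output"
)

func init() {
	// Makes the output usable as `k6 run --out prom-http script.js`
	// when compiled into a k6 binary (e.g. with xk6).
	output.RegisterExtension("prom-http", New)
}

type Output struct {
	mu     sync.Mutex
	sums   map[string]float64 // running sum per metric name (deliberately simplistic)
	server *http.Server
}

func New(_ output.Params) (output.Output, error) {
	return &Output{sums: make(map[string]float64)}, nil
}

func (o *Output) Description() string {
	return "prom-http: serves aggregated metrics over HTTP"
}

func (o *Output) Start() error {
	mux := http.NewServeMux()
	mux.HandleFunc("/metrics", func(w http.ResponseWriter, _ *http.Request) {
		o.mu.Lock()
		defer o.mu.Unlock()
		for name, v := range o.sums {
			// Prometheus text exposition format, untyped samples for brevity.
			fmt.Fprintf(w, "%s %g\n", name, v)
		}
	})
	o.server = &http.Server{Addr: ":9798", Handler: mux} // port chosen arbitrarily
	go func() { _ = o.server.ListenAndServe() }()
	return nil
}

// AddMetricSamples is called by k6 with batches of samples; aggregating them
// right away keeps memory bounded instead of buffering raw samples.
func (o *Output) AddMetricSamples(containers []metrics.SampleContainer) {
	o.mu.Lock()
	defer o.mu.Unlock()
	for _, sc := range containers {
		for _, s := range sc.GetSamples() {
			o.sums[s.Metric.Name] += s.Value
		}
	}
}

func (o *Output) Stop() error { return o.server.Close() }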

benc-uk (Author) commented Mar 2, 2021

Thanks @na--, I was thinking the API would only return the metrics for the moment in time at which it is called, rather than filling up a huge in-memory buffer. A "snapshot" approach like this works well for something like Prometheus, which scrapes at intervals.

I'm not familiar enough with the internals of k6 to know whether this approach is feasible without filling up a buffer.

Anyhow, I think the output extension is a good route to investigate for this. Thank you!

na-- (Member) commented Mar 2, 2021

> I'm not familiar enough with the internals of k6 to know whether this approach is feasible without filling up a buffer.

There would have to be some kind of a buffer, for sure. Or, at least, some time/count window, so we can have some sort of a bounded/circular buffer. But then we'd have to have some way of configuring it, and it'd still have some performance impact, even if it's not huge.
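
For illustration, here is a hypothetical bounded ring buffer of the kind being discussed; nothing like this exists in k6, it just shows how a fixed-size window keeps memory constant at the cost of dropping older samples:

// A hypothetical fixed-size ring buffer for metric samples, illustrating the
// bounded-window idea discussed above; not part of k6.
package window

type Sample struct {
	Name  string
	Value float64
}

// Ring keeps the last N samples, overwriting the oldest ones, so memory use
// stays constant no matter how long the test runs.
type Ring struct {
	buf  []Sample
	next int
	full bool
}

func NewRing(n int) *Ring { return &Ring{buf: make([]Sample, n)} }

func (r *Ring) Add(s Sample) {
	r.buf[r.next] = s
	r.next = (r.next + 1) % len(r.buf)
	if r.next == 0 {
		r.full = true
	}
}

// Snapshot returns the currently buffered samples, oldest first.
func (r *Ring) Snapshot() []Sample {
	if !r.full {
		return append([]Sample(nil), r.buf[:r.next]...)
	}
	return append(append([]Sample(nil), r.buf[r.next:]...), r.buf[:r.next]...)
}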

As I said, the nice thing about outputs (and output extensions) is that they don't have costs if you don't enable them, and they are flexible enough for anything that has to do with metrics, so that's the way to go here.
