
Being able to edit metrics emitted by requests before that is done #1716

Closed
mstoykov opened this issue Nov 9, 2020 · 11 comments
Labels
enhancement · feature · evaluation needed (proposal needs to be validated or tested before fully implementing it in k6) · new-http (issues that would require (or benefit from) a new HTTP API)

Comments

@mstoykov
Collaborator

mstoykov commented Nov 9, 2020

Currently, if you make an HTTP request, it emits a bunch of metrics, and a user can add additional tags before making the request. What isn't possible is to modify the metrics after the request has finished, based on something from the response (for example).

Reported use case:

https://community.k6.io/t/track-transmitted-data-per-url-with-http-batch/1054
Another user has a machine identifier in the response and would like to tag requests with the machine they came from (and machines are not in an exactly 1-to-1 relationship with IPs)

Feature Description

Have a way to edit the metrics emitted by requests after the request has finished, but before they are emitted.

Suggested Solution (optional)

The proposed solution is one more parameter in Params:

http.request("GET", "https://test.k6.io", null, {
  preMetricEmission: function(response, metricObject) {
    metricObject["somethingCool"] = response.json()["somethingCool"];
  }
});

With the above code, preMetricEmission (another proposed name was postRequest) must be called after the request is done but before we emit the metrics (after they are calculated, though). The provided objects are the http.Response (possibly with some things missing?) and metricObject, which should be a JSON representation of the metrics, perhaps with some additional fields/methods.

This also immediately gives a way of dropping unwanted HTTP metrics by simply removing them from metricObject. It also means that you can edit the tags for each of the metrics, so k6 should probably make it impossible to remove tags that are required by another part of k6.

One argument against this is that ... well, the metrics won't be emitted until the function returns. Also, what should the behavior be when the function throws an exception?

Obviously, this will also be called multiple times if there is a redirect, which will also help with this use case.

The good part is that, apart from those concerns, this should be fairly straightforward to add to the current codebase ... probably :D
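To make the proposed semantics concrete, here is a minimal plain-JavaScript sketch, runnable outside k6. The emitRequestMetrics function, the metrics object shape, and the callback name preMetricEmission are all illustrative assumptions based on the proposal above, not an existing k6 API:

```javascript
// Sketch: how a hypothetical preMetricEmission hook could run between
// metric calculation and metric emission. Plain JS, no k6 imports.
function emitRequestMetrics(response, metrics, preMetricEmission) {
  // Let the user callback edit (or delete entries of) metrics first.
  if (preMetricEmission) {
    preMetricEmission(response, metrics);
  }
  // Only metrics still present after the callback get emitted.
  return Object.entries(metrics).map(([name, m]) => ({ name, ...m }));
}

// Fake response and metrics, standing in for what k6 would provide.
const response = { status: 200, json: () => ({ machine: "worker-7" }) };
const metrics = {
  http_req_duration: { value: 123.4, tags: { url: "https://test.k6.io" } },
  http_req_blocked: { value: 0.2, tags: { url: "https://test.k6.io" } },
};

const emitted = emitRequestMetrics(response, metrics, (res, m) => {
  m.http_req_duration.tags.machine = res.json().machine; // add a tag
  delete m.http_req_blocked; // drop an unwanted metric entirely
});

console.log(emitted.map((m) => m.name)); // only http_req_duration remains
```

This also illustrates the safeguard concern from the comment: nothing in this sketch stops the callback from deleting a metric or tag that some other part of the system requires.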

@mstoykov mstoykov added enhancement feature evaluation needed proposal needs to be validated or tested before fully implementing it in k6 labels Nov 9, 2020
@na--
Member

na-- commented Nov 9, 2020

The good part is that apart from those this will be fairly straightforward to be added in the current codebase ... probably :D

I am not so sure. Given that the callback will need to be executed in the same JS runtime as the other user code, one big potential issue is http.batch() calls, where we'd need to add some locking so that only a single callback is ever called at a time.

Also, currently the metric emission happens in here: https://github.com/loadimpact/k6/blob/012f1aab2444677cd4a7398e612a534ba51d3f3d/lib/netext/httpext/transport.go#L169

A place that's very far from the JS logic and any callbacks that would need to be called... And the actual metric tag values (which should probably be available in the callback) are generated just before that. So some significant refactoring may be necessary to implement this... 😞

@mstoykov
Collaborator Author

Both of those ... seem pretty straightforward to fix ... to me :)

@na--
Member

na-- commented Nov 10, 2020

Maybe. I hadn't actually spent any time looking into how to solve these and other potential issues yesterday; I just commented on why the task might not be as straightforward as you suggested. I didn't say the obstacles are insurmountable, just that they exist... And if you have a good idea of how to work around them, sharing those details would be much more useful for whoever ends up implementing this than just saying "nah, easy peasy for me" 😉

FWIW, looking at it again, it'd probably be sufficient to add the ability to pass a normal Go callback to the tracerTransport in httpext. Then the HTTP code in /js can just pass a lambda that calls the JS callback from http.request(), while http.batch() can pass a lambda that does the same with some extra locking?

@sniku
Collaborator

sniku commented Nov 10, 2020

This feature is also needed to implement tagging of failed requests.

Example:

http.get('https://test.k6.io/', null, {
  metricInterceptor: (response, metricsObject) => {
    let requestFailed = response.status >= 399 || response.status === 0;
    metricsObject["http_req_duration"].tags['hasFailed'] = requestFailed;
  }
});

The UX needs some polishing 😄

@na--
Member

na-- commented Nov 10, 2020

@sniku, would you say that such a feature would satisfy your requests in #1311? I imagine making full use of it would require the new HTTP API, or a wrapper around the current one, so you don't have to specify the callback manually for every request. But other than that, do you think anything would be lacking?

Personally, I think it will also somewhat reduce the need for implementing more comprehensive metric filtering functionality like #1321; we should just design the callback API in a way that allows users to completely silence the metrics from some requests. The downside is that we'd probably have to implement such a callback API for every other protocol API we support eventually, but that's probably OK.

@sniku
Collaborator

sniku commented Nov 12, 2020

would you say that such a feature would satisfy your requests in #1311?

If we make it possible to specify a default interceptor then it will satisfy #1311, possibly better than the original proposal.

Here's what I'm currently thinking it should look like:

import { sleep } from 'k6'
import { Rate } from 'k6/metrics'
import http from 'k6/http'

let http_req_errors = new Rate('http_req_errors') // redundant metric for demonstration purposes

export let options = {
  thresholds: {
    // both thresholds essentially do the same thing.
    http_req_errors: ['rate < 0.1'],
    "http_req_duration{hasFailed:true}": ['rate < 0.1'],
  },
  metricInterceptors: {
    http: (response, metricsObject) => { // default http interceptor
      let requestFailed = response.status >= 399 || response.status === 0;
      metricsObject["http_req_duration"].tags['hasFailed'] = requestFailed;
      http_req_errors.add(requestFailed);
    }
  }
}

export default function() {
  http.get('https://test.k6.io/', null, {
    metricInterceptor: (response, metricsObject) => {
      // overriding the default interceptor 
    }
  });

  sleep(1)
}

The UX of attaching tags to metrics should be improved if possible, but I couldn't come up with a nicer way of doing it.
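The override behavior in the script above can be sketched in plain JavaScript, runnable outside k6. resolveInterceptor and everything else here are hypothetical names for illustration only, not part of any existing or agreed-upon API:

```javascript
// Sketch: a per-request interceptor, if given, overrides the global default.
function resolveInterceptor(globalInterceptors, params) {
  return (
    (params && params.metricInterceptor) ||
    (globalInterceptors && globalInterceptors.http) ||
    null
  );
}

// A default interceptor that tags failed requests, as in the example above.
const globalInterceptors = {
  http: (response, metrics) => {
    metrics.http_req_duration.tags.hasFailed =
      response.status >= 400 || response.status === 0;
  },
};

const metrics = { http_req_duration: { value: 87.5, tags: {} } };
const response = { status: 503 };

// No per-request interceptor: the global default applies.
const interceptor = resolveInterceptor(globalInterceptors, {});
if (interceptor) interceptor(response, metrics);
console.log(metrics.http_req_duration.tags.hasFailed); // true

// A per-request interceptor replaces the default entirely.
const override = resolveInterceptor(globalInterceptors, {
  metricInterceptor: () => {}, // no-op: suppresses the default tagging
});
```

One design question this surfaces: whether a per-request interceptor should replace the default (as sketched here) or run in addition to it.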

@na--
Member

na-- commented Nov 12, 2020

Hmm, I'm firmly against a global option for this. One major reason is that it's going to be hell to implement: the exported script options must be serializable to plain JSON for k6 archive and k6 cloud (and for other internal k6 implementation reasons). And we have enough issues with the configuration already (#883); even if there's a way to implement this sanely (which I doubt), we probably shouldn't...

Another big reason is that the UX is poor. Pretty much every global k6 option has proven insufficient for complicated use cases, especially the HTTP-related ones. What would a user do if they want to tag metrics from different scenarios (or parts of scenarios) differently? A global metricInterceptors would quickly devolve into a spaghetti mess of if checks and branches, and even that might not be sufficient to handle all use cases...

Instead of a default global "interceptor" (not sure I like the name 😅), the callback function should be part of the new HTTP API's Client class. That would be much more flexible (you could have one or many, or even change the callback mid-test) and easier to implement...

@mstoykov
Collaborator Author

@sniku you can't have functions as part of the options, but I guess it would be fine if it were the name of an exported function, the same way it is for scenarios.exec?

Other than that ... I do think this will probably be required even if we have another way to suppress metric emission, and the alternatives proposed so far seemed (to me, again) a lot less straightforward.

My biggest concerns with this proposal are:

  1. People can technically save the objects from that lambda to be used outside of it ... and that is probably going to be a pretty bad idea, as at least the response can possibly change between those uses. Also, changing metricsObject after the function returns will probably be ... racy.
  2. metricsObject will need to be very well designed and will need some safeguards against people removing parts that are, for example, needed by the cloud output.

Additionally, metricsObject (at least for HTTP) should probably have a single tags field instead of multiple ones, as I am pretty sure that's how it works internally anyway.

@mstoykov
Collaborator Author

I also somewhat agree with @na-- that global options have not been great; maybe we should leave that for a second iteration on this feature if we decide to implement it with the current HTTP API, and definitely leave it out of the new HTTP API.

But I am of the opinion that the first iteration should probably target the current HTTP API, as the new one is nowhere in sight. The code for this will likely mostly be around the metricsObject implementation, which will be easily transferable, while still solving (or at least helping with) a not-small number of problems that are currently unsolvable.

@na--
Member

na-- commented Nov 12, 2020

I am not completely opposed to adding such a callback to the current http.Params object, i.e. a per-request callback. Making a wrapper around k6/http that takes a default interceptor and passes it individually to every single request is also not going to be difficult.
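Such a wrapper could look roughly like this. This is a plain-JavaScript sketch with a stubbed http object so it runs outside k6; metricInterceptor is the hypothetical per-request parameter discussed above, not an existing k6 API:

```javascript
// Sketch: wrap an http-like module so every request gets a default
// metricInterceptor unless the caller provides their own.
function withDefaultInterceptor(http, defaultInterceptor) {
  return {
    get(url, body, params = {}) {
      return http.get(url, body, {
        metricInterceptor: defaultInterceptor,
        ...params, // a caller-supplied interceptor wins over the default
      });
    },
  };
}

// Stub standing in for k6/http; it just records the params it was given.
const calls = [];
const fakeHttp = {
  get: (url, body, params) => calls.push({ url, params }),
};

const defaultInterceptor = () => {};
const wrapped = withDefaultInterceptor(fakeHttp, defaultInterceptor);

wrapped.get("https://test.k6.io/"); // receives the default interceptor
wrapped.get("https://test.k6.io/", null, { metricInterceptor: () => {} });
```

A real wrapper would also need to cover http.post, http.request, http.batch, and so on, but the merge logic per request would be the same.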

The only downsides I can see are that the more we add to the current HTTP API, the more we'll delay writing the new one, and the more stuff we'll have to actually port when we finally bite the bullet... The fact is that there are simply some issues we know we can't solve with the current API at all, due to poor design, so we know we need a new one. The longer we delay, the harder that project becomes...

@imiric
Contributor

imiric commented Mar 28, 2023

We're unlikely to address this in the current HTTP API, but it will be possible in the new API (initial design document). We haven't decided on the syntax yet, but we'll take the discussion above into consideration.

I'll close this issue, as the design of this, and other features, is continued in #2971.
