Dynamic counters #1900

Open

ofauchon opened this issue Mar 11, 2021 · 6 comments
Comments

@ofauchon
Contributor

Hi.

I tried to create counters "on the fly":

import { Counter } from 'k6/metrics';

let itfCounters = {}
function increaseCounter(cnt){
        if (!itfCounters.hasOwnProperty(cnt)){
                console.log("Create counter " + cnt)
                // Creating the metric here, outside the init context, triggers the error below
                itfCounters[cnt] = new Counter(cnt)
        }
        itfCounters[cnt].add(1)
}
...

increaseCounter("my_super_counter_123")

But it fails with this error:

INFO[0008] Create counter my_super_counter_123
ERRO[0008] GoError: Metrics must be declared in the init context

Could you tell me why this isn't allowed?
And is there a workaround to do this?

Thanks

Olivier

@na--
Member

na-- commented Mar 11, 2021

You can find the rationale for requiring metrics to be known in the init context in #1832, as well as in some of the many issues connected to it, like #1435, #1443, and #1321.

Even though these issues aren't implemented yet, their implementation depends on us knowing what metrics exist, as well as their types and contents, in the init context. Thankfully, that won't be a breaking change, since, as you have seen, you can't create custom metrics outside of the init context even in old k6 versions.
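
For reference, here is a minimal sketch of the supported pattern — the metric is declared in the init context and only updated from VU code (the counter name is just an example):

import { Counter } from 'k6/metrics';

// Declared in the init context, i.e. outside of default()/setup()/teardown()
const myCounter = new Counter('my_super_counter');

export default function () {
  // Updating the metric from VU code is fine; only creating it there is restricted
  myCounter.add(1);
}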

All of that said, can you explain more precisely the use case that required you to add counters dynamically? It would be useful to know, it may influence how we implement these things, or there might be an alternative approach.

@ofauchon
Contributor Author

Here is an example use case:

  • start iteration:
    • The k6 script makes an HTTP request to a 'catalog' webservice to pick one random URL (from a realtime list of active customer websites)
    • The script executes a couple of HTTP sanity requests on this specific target URL
    • Counters (e.g. HTTP OK/KO, responses > threshold) are updated (both global and hostname-specific counters)
    • sleep x seconds
  • end loop

Note that many concurrent users may pick the same random URL.

Olivier

@na--
Member

na-- commented Mar 11, 2021

Hmm, this use case seems better suited to having a single Counter metric, tagging the values inside of it based on the URL, and then setting thresholds based on these tags: https://k6.io/docs/using-k6/thresholds#thresholds-on-tags
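
For example, a rough sketch of that tag-based approach (the metric name, tag names, and hostname are just illustrative; in the real script the target would come from the catalog webservice):

import http from 'k6/http';
import { Counter } from 'k6/metrics';

// A single counter, declared in the init context
const sanityChecks = new Counter('sanity_checks');

export const options = {
  thresholds: {
    // Threshold on a tag value, per the docs linked above
    'sanity_checks{hostname:example.com}': ['count>0'],
  },
};

export default function () {
  const hostname = 'example.com'; // would be picked from the catalog webservice
  const res = http.get(`https://${hostname}/`);
  // One metric, distinguished by tags instead of per-hostname metric names
  sanityChecks.add(1, { hostname: hostname, status: String(res.status) });
}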

I am not sure the current k6 capabilities are enough for what you want, but improvements should probably lie in this direction rather than the one you suggest: #1321, #1441, #1313

@hptabster

hptabster commented Apr 20, 2022

I would like to understand how I could do something similar for this case. I have WebSocket traffic consisting of protocol messages that have a msg_type in the body. I would like to be able to count these different types in a similar fashion:

      socket.on("message", (data) => {
        const wsBody = JSON.parse(data).body;
        const wsData = wsBody.data;
        log(`WS message received: data=${data}`);
        // The intent: one Counter per msg_type, created when the type is first seen
        if (!(wsBody.msg_type in counters)) {
          counters[wsBody.msg_type] = new Counter(wsBody.msg_type);
        }
        counters[wsBody.msg_type].add(1);
        ...
      });

Is there currently a capability to be able to do this?

@na--
Member

na-- commented Apr 21, 2022

You cannot create new metrics in the middle of the test and it's unlikely we'll ever support that. Technically, you can currently work around that restriction from xk6 extensions, but we make no guarantees about not breaking that in future k6 releases.

@hptabster, if you know all of the possible msg_type values, I suggest creating the Counters in the init context. And if you don't know them, you can send msg_type as a metric tag, not part of the metric name.
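
For illustration, a rough sketch of the tag-based version (the metric name and URL are just examples):

import ws from 'k6/ws';
import { Counter } from 'k6/metrics';

// One counter created in the init context; msg_type becomes a tag, not a metric name
const wsMessages = new Counter('ws_messages');

export default function () {
  ws.connect('wss://example.com/socket', {}, (socket) => {
    socket.on('message', (data) => {
      const wsBody = JSON.parse(data).body;
      wsMessages.add(1, { msg_type: String(wsBody.msg_type) });
    });
    socket.setTimeout(() => socket.close(), 10000);
  });
}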

@deepikaUKG

deepikaUKG commented Jan 31, 2024

Hi,
I have created all my counters in the init context. During the test I update them with some tags. I want to see the counters in the summary report broken down by tags. How is that possible without mentioning all the combinations in the thresholds?

import { Counter } from 'k6/metrics';

// Per-scenario pass/fail counters, keyed by scenario name
const counterAPIPassed = {};
const counterAPIFailed = {};

export function createMetricsAndThresholds(options){

    for (let key in options.scenarios) {
        counterAPIPassed[key] = new Counter(`CounterAPIPassed scenario-${key}`);
        counterAPIFailed[key] = new Counter(`CounterAPIFailed scenario-${key}`);

        let durationPerScenario = `http_req_duration{scenario:${key}}`;

        if (!options.thresholds[durationPerScenario]) {
            options.thresholds[durationPerScenario] = [];
        }
        options.thresholds[durationPerScenario].push('avg<500');
        options.scenarios[key].env['MY_SCENARIO'] = key;
    }
}

export function updateCounterForScenario(result, status, tagValue){
    if (status) {
        counterAPIPassed[__ENV.MY_SCENARIO].add(1, { callType: `${tagValue}` });
    } else {
        counterAPIFailed[__ENV.MY_SCENARIO].add(1, { callType: `${tagValue}` });
        console.log("API Failed: " + tagValue);
        console.log("Failure response: " + JSON.stringify(result));
        console.log("API failed with " + status + " " + result.response.status + " Error: " + result.response.body);
    }
}

I got the below in the summary report

 CounterAPIFailed scenario-positionCreationAndAssignmentED...: 0       0/s
     ✓ { callType:PIQS-GETPosIncumAssignmentCount }..............: 0     

when I mentioned this in the thresholds:

  thresholds: {
        'CounterAPIFailed scenario-positionCreationAndAssignmentED{callType:PIQS-GETPosIncumAssignmentCount}':['count<1'],

But I have lots of combinations which are created on the fly, so is there a way to get these counters with the tag info?
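
For reference, a sketch of one partial workaround, assuming the set of callType values can be enumerated in the init context (the list below is just a placeholder) — generating the submetric thresholds programmatically, the same way the per-scenario http_req_duration thresholds are generated above:

// Assumption: the possible callType values are known up front
const callTypes = ['PIQS-GETPosIncumAssignmentCount'];

export function addPerCallTypeThresholds(options) {
    for (let key in options.scenarios) {
        for (let callType of callTypes) {
            // A threshold on the tagged submetric makes it show up in the end-of-test summary
            options.thresholds[`CounterAPIFailed scenario-${key}{callType:${callType}}`] = ['count<1'];
        }
    }
}

This only helps when the callType values are knowable up front; combinations that only appear mid-test still won't get their own summary lines.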
