k6 v0.21.0 on Windows: wrong iterations count #652

Closed
gunzip opened this issue May 26, 2018 · 3 comments

gunzip commented May 26, 2018

k6 v0.21.0 on Windows (downloaded from the zip file on GitHub)

The number of reported iterations is lower than the real number of HTTP requests by exactly the number of VUs.
You can verify this by pointing k6 at something like Beeceptor, but I've experienced this behavior consistently, e.g. using

k6 run -u 2 -d 4s -> reported iterations: 43 (real number of requests: 45 = 43 + 2)

on another (slower) endpoint:
k6 run -u 10 -d 10s -> reported iterations: 42 (real number of requests: 52 = 42 + 10)


na-- commented May 26, 2018

I think that this is because when you use duration (via the -d 4s flag), k6 treats it as a hard stop. Any metric samples that are after the cutoff are ignored. Since the iterations metric is emitted at the end of an iteration, it probably gets cut off.

If you run the script with -i/--iterations instead of -d/--duration, the number of HTTP requests and iterations should match (assuming 1 request per iteration), since there's no hard stop in that case: k6 waits for the last iteration to finish.
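
For illustration, a minimal sketch of the equivalent script-based setup (the target URL and numbers are placeholders, not from this issue): fixing the iteration count instead of the duration means no iteration is cut off mid-flight, so the two counts should agree.

```js
import http from 'k6/http';

// Equivalent to `k6 run -u 2 -i 20 script.js`: no time-based hard stop.
export const options = {
  vus: 2,
  iterations: 20,
};

export default function () {
  http.get('https://test.k6.io/'); // placeholder endpoint
}
```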

@robingustafsson, @luizbafilho: I'm not sure if the current behavior is the best one. Maybe we just have to document it better, but maybe it makes more sense to emit the iterations metric at the start of an iteration instead of at its end?


na-- commented May 29, 2018

@gunzip, I discussed this with @robingustafsson and @luizbafilho, and for now we'll keep emitting the iterations metric at the end of an iteration: counting only completed iterations is a valid use case, and changing it could break code that depends on the current behavior. We'll improve the documentation to mention that only full iterations are counted and that when you run tests with --duration, it's a hard cutoff, so some metric samples may be dropped.

It's probably also worth mentioning that users who want to count the number of started iterations can easily do so with a simple custom Counter metric that's incremented immediately at the beginning of each iteration.
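
For illustration, a minimal sketch of that workaround (the metric name and target URL are placeholders):

```js
import http from 'k6/http';
import { Counter } from 'k6/metrics';

// Custom counter incremented at the start of each iteration, so iterations
// interrupted by the --duration cutoff are still counted.
const startedIterations = new Counter('started_iterations');

export default function () {
  startedIterations.add(1); // emitted before any work is done in the iteration
  http.get('https://test.k6.io/'); // placeholder endpoint
}
```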

@na-- na-- added the docs label May 29, 2018
@robingustafsson robingustafsson added this to the v1.0.0 milestone Jul 6, 2018
@na-- na-- removed this from the v1.0.0 milestone Jan 21, 2021

na-- commented Jan 21, 2021

This issue seems to have slipped through the cracks... 😄 k6 has had the gracefulStop and gracefulRampDown options since the new executors and scenarios were introduced in k6 v0.27.0. They make k6 wait for in-flight iterations to finish for a configurable period after the scenario was supposed to end, and the old hard-stop behavior can still be recreated with gracefulStop: '0s'.
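
For reference, a minimal sketch of such a scenario configuration (the executor, VU count, duration, and URL are assumed values, not taken from this thread):

```js
import http from 'k6/http';

export const options = {
  scenarios: {
    my_scenario: {
      executor: 'constant-vus',
      vus: 10,
      duration: '10s',
      gracefulStop: '0s', // hard stop at 10s, recreating the old cutoff behavior
    },
  },
};

export default function () {
  http.get('https://test.k6.io/'); // placeholder endpoint
}
```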

The remainder of this issue falls under #877 (comment), which already has a WIP PR that resolves it (#1769), though it's currently blocked by other things. In any case, I think we can safely close this issue.

@na-- na-- closed this as completed Jan 21, 2021