
Reduce memory usage for long duration tests #2367

Open
msaf1980 opened this issue Feb 3, 2022 · 3 comments
Labels
evaluation needed (proposal needs to be validated or tested before fully implementing it in k6), feature

Comments


msaf1980 commented Feb 3, 2022

Feature Description

Statistics require a lot of memory when running long-duration tests. Is there any way to keep statistics and thresholds while reducing memory usage?

Suggested Solution (optional)

Maybe a flag (perhaps called cycles) that allows running N executions (1 by default) per cycle.
Statistics would be flushed after every cycle.

There is a draft in https://github.com/msaf1980/k6/tree/iterate_loop, but it required changing some internals, so it may not be complete.

Already existing or connected issues / PRs (optional)

No response

na-- added the evaluation needed label on Feb 3, 2022

na-- (Member) commented Feb 3, 2022

We have plans (e.g. #763, #1321) for reducing the memory usage that the built-in Trend metrics require, but I am not sure if something like what you propose is a viable option to include in the k6 core 🤔 At best, maybe we can add a new instance.flushMetrics() method to the k6/execution module that can be called from the script, but I imagine there will be quite a lot of unexpected issues even from that... 😕

For now, the official recommendation for long-running tests is to run k6 with --no-summary and --no-thresholds and stream the metrics to an external output (e.g. json, csv, k6 cloud, etc.) with the --out option. That way k6 won't keep all of the metrics data internally for the duration of the test; it will flush it to the external output at regular intervals. Can you explain why this is insufficient for your use case?
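For illustration, such an invocation could look like the following (the script name and JSON output path are placeholders):

```sh
# Stream metrics to a JSON file instead of accumulating them in memory;
# the end-of-test summary and threshold processing are disabled.
k6 run --no-summary --no-thresholds --out json=results.json script.js
```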

msaf1980 (Author) commented Feb 3, 2022

Without thresholds I can't auto-stop when the system under test degrades.
Long-duration tests are usually regression tests, so auto-stop is really needed in these cases.
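As a minimal sketch of the kind of threshold-based auto-stop meant here, using k6's existing abortOnFail threshold option (the metric, limit, delay, and target URL are illustrative, not from this issue):

```javascript
import http from 'k6/http';

// Abort the whole test run automatically if the 95th-percentile request
// duration stays above 800ms for more than a minute (values are examples).
export const options = {
  thresholds: {
    http_req_duration: [
      { threshold: 'p(95)<800', abortOnFail: true, delayAbortEval: '1m' },
    ],
  },
};

export default function () {
  http.get('https://test.k6.io/'); // placeholder target URL
}
```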

I agree that my solution is ugly and not a viable option to include in the k6 core; it's just an example of one way to reduce memory usage. Using histograms (like #763) is a better solution. And it may require minimal refactoring?

na-- (Member) commented Feb 3, 2022

Long-duration tests are usually regression tests, so auto-stop is really needed in these cases.

You could monitor the response times in the script, for example by checking Response.timings.duration, and use the test.abort() method from k6/execution (introduced in the recently released k6 v0.36.0) to stop the test.
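A minimal sketch of that approach (the target URL and the 2000ms limit are illustrative, not from this issue):

```javascript
import http from 'k6/http';
import exec from 'k6/execution';

export default function () {
  const res = http.get('https://test.k6.io/'); // placeholder target URL

  // Stop the whole test run if a single response takes longer than 2s;
  // the limit is an example, pick whatever counts as degradation for you.
  if (res.timings.duration > 2000) {
    exec.test.abort('response time degraded beyond 2000ms');
  }
}
```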

And it may require minimal refactoring?

That's the hope, though the last time I looked at it, the Go HDR histogram library was unmaintained, so we might need to fork and polish it... 🤷‍♂️
