
K6 Remote write took 5.1603032s while flush period is 1s. Some samples may be dropped. #36

Closed
perrinj3 opened this issue Aug 25, 2022 · 2 comments
Labels
bug Something isn't working

Comments

@perrinj3

Brief summary

When using the k6 Prometheus remote write extension, samples are being dropped at a load of 700 TPS.
No custom tags are being used, and Prometheus does not appear to be under CPU or memory stress.
The same problem occurs if we remote write to Mimir.

k6 version

k6 v0.38.0, extension v0.0.2

OS

Windows

Docker version and image (if applicable)

No response

Steps to reproduce the problem

Simple k6 test using the k6 Prometheus remote write extension; the test script and the k6 options file are below.

import http from 'k6/http';
import { sleep } from 'k6';

export default function () {
  // Placeholder URL for a simple Apache endpoint
  const res1 = http.get('http://simple-apache-endpoint/');
  sleep(0.0001);
}
{
  "stages": [
    { "duration": "10s", "target": 5 },
    { "duration": "600s", "target": 5 },
    { "duration": "10s", "target": 1 }
  ],
  "noConnectionReuse": true,
  "userAgent": "MyK6UserAgentString/1.0"
}
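
For context, this is roughly how such a test is run with the extension. It is only a sketch: the output name output-prometheus-remote and the K6_PROMETHEUS_REMOTE_URL environment variable are assumptions based on the extension's README for this version, and script.js, config.json and the remote write URL are placeholders.

# Build a k6 binary that bundles the extension (assumes xk6 is installed)
xk6 build v0.38.0 --with github.com/grafana/xk6-output-prometheus-remote@v0.0.2

# Run the script with the options file above and stream metrics to the remote write endpoint
K6_PROMETHEUS_REMOTE_URL=http://localhost:9090/api/v1/write \
  ./k6 run --config config.json -o output-prometheus-remote script.js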

Expected behaviour

Samples should be written out within the 1 s flush period at the tested TPS. We would like to run error-free at 1200 TPS.

Actual behaviour

WARN[0432] Remote write took 5.1603032s while flush period is 1s. Some samples may be dropped. nts=150005

perrinj3 added the bug label on Aug 25, 2022
na-- (Member) commented on Aug 25, 2022

This issue was originally reported in the main k6 repo, but it seems to be about https://github.com/grafana/xk6-output-prometheus-remote, so I'll move it there.

na-- transferred this issue from grafana/k6 on Aug 25, 2022
codebien (Contributor) commented on Sep 1, 2022

Hi @perrinj3,
we are already tracking this issue in #10.

Once we merge #38, it should help reduce this load.

Unfortunately, with the current implementation, skipping tags as suggested here may be the only option for reducing the amount of data to deliver.
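
As an illustration only, here is a minimal sketch of what skipping tags can look like on the script side, using k6's built-in systemTags option to keep just the tags the dashboards need (the tag set and endpoint URL are placeholders, not a recommendation):

import http from 'k6/http';

// Restricting the system tags k6 attaches to each sample reduces the number of
// distinct time series the output has to remote write on every flush.
export const options = {
  systemTags: ['status', 'method'], // example set; keep whatever your dashboards need
};

export default function () {
  http.get('http://simple-apache-endpoint/'); // placeholder endpoint
}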

We are also actively working on the main issue on the k6 side to resolve the root cause, so we will be able to provide a better-aggregated view of the different metrics.

I'm closing this issue; feel free to re-open it or add more observations directly in #10.

codebien closed this as completed on Sep 1, 2022