Abnormal Response Time Percentiles Reported After a Load Increase #2665

Open · dilinade opened this issue Apr 3, 2024 · 3 comments

dilinade commented Apr 3, 2024

Description

I've configured a custom load shape in Locust with a specific pattern. Within the first 30 seconds the load ramps up to 72 users, then spikes to approximately 136 users (nearly doubling the load), before returning to 72 users. However, after the spike, even though the user count is back at its initial level of 72, the reported response time percentiles are significantly lower than during the initial normal-load phase. This is unexpected: I would anticipate response times similar to the initial phase once the load returns to normal.

Steps to Reproduce:

  • Set up a custom load shape in Locust with the following characteristics:
      • Ramps up to 72 users within the first 30 seconds.
      • Spikes to approximately 136 users, doubling the load.
      • Returns to a stable load of 72 users after the spike.
  • Observe the response time percentiles during and after the spike.

Expected Behavior:
Response time percentiles should remain consistent with the initial normal-load phase throughout the test. I can confirm that the behavior does not originate from the web server application itself.

Actual Behavior:
Response time percentiles are significantly lower after the spike in user count, despite the number of users returning to normal levels.

Additional Information:

  • Using wait_time = constant_throughput(2) per user.
  • CURRENT_RESPONSE_TIME_PERCENTILE_WINDOW = 2

[attached screenshot]
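
For reference, this is roughly how the window setting is applied (a sketch; the override sits at the top of the locustfile and uses the module-level constant exposed by locust.stats):

import locust.stats

# Shrink the sliding window (in seconds) used for the "current" response
# time percentiles shown in the web UI; the default is 10.
locust.stats.CURRENT_RESPONSE_TIME_PERCENTILE_WINDOW = 2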

Command line

locust -f mylocustfile.py

Locustfile contents

import json
from locust import FastHttpUser, TaskSet, task, constant_throughput
from locust import LoadTestShape


class UserTasks(TaskSet):
    @task
    def get_root(self):
        self.client.get("/")


class WebsiteUser(FastHttpUser):
    wait_time = constant_throughput(2)
    tasks = [UserTasks]


class StagesShape(LoadTestShape):
    stages = [
        {"duration": 30, "users": 72, "spawn_rate": 30},
        {"duration": 60, "users": 136, "spawn_rate": 30},
        {"duration": 90, "users": 72, "spawn_rate": 30},
    ]

    def tick(self):
        run_time = self.get_run_time()

        # Return the target (users, spawn_rate) for the first stage whose
        # end time has not yet been reached; after 90 seconds return None,
        # which stops the test.
        for stage in self.stages:
            if run_time < stage["duration"]:
                return stage["users"], stage["spawn_rate"]

        return None

Python version

3.8.10

Locust version

2.24.1

Operating system

Ubuntu

dilinade added the bug label on Apr 3, 2024

cyberw (Collaborator) commented Apr 3, 2024

Oh. That looks very strange.

  • Can you try with the default CURRENT_RESPONSE_TIME_PERCENTILE_WINDOW setting and see if it makes a difference?
  • Can you output the stats to a CSV file and add it here? (Using --csv filename, as in the example below.)
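
For example, something like this (the filename prefix percentile_debug is just a placeholder; any prefix works):

locust -f mylocustfile.py --csv percentile_debug

That should write the aggregated stats, the per-interval stats history and any failures to CSV files starting with percentile_debug.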

dilinade (Author) commented Apr 3, 2024

Thank you for the response.

There's a bit of a difference when it's set to the default value of 10, but response latencies still remain lower than the levels seen before the spike.

cyberw (Collaborator) commented Apr 5, 2024

Ok, two things:

  • The CSV stats align with the UI, so it is not a presentation issue. Are you completely sure you don't actually have lower response times after the ramp-down? Perhaps there is some auto-scaling going on? (A quick way to check this from the CSV history is sketched below.)
  • I think there might be an error in the presentation of average response times. It seems the line displays the average for the whole test and not just for the relevant interval. @andrewbaldwin44, can I bother you to take a look?
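
For reference, a rough sketch of how the before/after numbers could be compared straight from the stats history CSV (assuming the percentile_debug prefix suggested above; the column names are taken from the usual Locust CSV layout and may differ slightly between versions):

import pandas as pd

# Per-interval stats written by --csv percentile_debug
# (column names assumed; adjust if your Locust version differs).
df = pd.read_csv("percentile_debug_stats_history.csv")
agg = df[df["Name"] == "Aggregated"]

# Split the run into the phases of the load shape (0-30s and 60-90s).
elapsed = agg["Timestamp"] - agg["Timestamp"].min()
before_spike = agg[elapsed < 30]
after_spike = agg[(elapsed >= 60) & (elapsed < 90)]

# Compare the reported 95th percentile in the two 72-user phases.
print("95% before spike:\n", before_spike["95%"].describe())
print("95% after spike:\n", after_spike["95%"].describe())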
