Custom Threshold Rules #2668

Open
ktchani opened this issue Apr 7, 2024 · 3 comments

Comments

ktchani commented Apr 7, 2024

Description

I've worked pretty extensively with the Locust framework. Locust is a fantastic bare-bones (lightweight) framework. I really enjoy using it, and kudos to all the contributors. That being said, it seems to me (unless I'm missing something) that it is very pass/fail oriented. It does not have out-of-the-box solutions for setting/applying custom failure rules. What is considered a failure is very different depending on the use case. Therefore, I believe the simple pass/fail model is insufficient.

As an example:
If I have a test that hits an endpoint: some_example/resource?id=123

There are no options to specify what my expectations of this endpoint are. If the endpoint responds in 2000 ms, I might very well consider that a critical failure. Currently, my options are as follows:

  1. Look at the html report and see if there is any regression.
  2. Save the raw json data to some data store, and keep historical records. There I can set up various tooling to alert me upon regression, etc.

I suggest implementing the ability to:

  1. Define rules, which would take args such as attribute, operator, and threshold. As an example: attribute=average response time, operator=greater than, threshold=2000 ms.
  2. Enable specific rules on specific tests. The idea here is that there would be an output at the end of my test runs that I can then use to automate things like sending notifications, retrying tests to see whether a failure is transient, etc.

A basic approach would be to extend the request method to accept an optional rule arg and process the rules downstream. The actual rule application would need to occur at the end of the test run, since attributes could be aggregations across the whole run, such as averages. I found it difficult to understand how the output JSON is generated; ideally, this would just enhance that output.
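Something like this is roughly what I have in mind, just a sketch built on Locust's existing hooks: the Rule class, the operator table, and the rule_results.json file name are made up for illustration, while events.quitting, environment.stats.total, and environment.process_exit_code are existing Locust APIs.

```python
import json
import operator
from dataclasses import dataclass

from locust import events


@dataclass
class Rule:
    attribute: str    # name of a stats attribute, e.g. "avg_response_time"
    op: str           # comparison operator, e.g. ">"
    threshold: float  # threshold value, e.g. 2000 (ms)


OPERATORS = {">": operator.gt, ">=": operator.ge, "<": operator.lt, "<=": operator.le}

# Rules to evaluate against the aggregated stats when the run ends.
RULES = [Rule(attribute="avg_response_time", op=">", threshold=2000)]


@events.quitting.add_listener
def evaluate_rules(environment, **kwargs):
    total = environment.stats.total  # aggregated stats across the whole run
    results = []
    for rule in RULES:
        value = getattr(total, rule.attribute)
        violated = OPERATORS[rule.op](value, rule.threshold)
        results.append(
            {
                "attribute": rule.attribute,
                "value": value,
                "threshold": rule.threshold,
                "violated": violated,
            }
        )
        if violated:
            environment.process_exit_code = 1  # let CI see the run as failed
    # Persist the rule outcomes so downstream tooling can pick them up.
    with open("rule_results.json", "w") as f:
        json.dump(results, f, indent=2)
```

With something like this, the thresholds are checked once against the aggregated stats, the outcome is saved as machine-readable output for notification/retry tooling, and CI sees a non-zero exit code when a rule is violated.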


cyberw commented Apr 7, 2024

Hi!

There are a couple of features that at least solve parts of your problem:

  1. Mark the response as failed based on response time (a minimal sketch is shown after this list): https://docs.locust.io/en/stable/writing-a-locustfile.html#validating-responses

  2. Set exit code of the locust process based on checking some metric: https://github.com/SvenskaSpel/locust-plugins?tab=readme-ov-file#command-line-options
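For reference, number 1 looks roughly like this, using your 2000 ms example; catch_response and response.failure() are the documented way to mark a request as failed yourself:

```python
from locust import HttpUser, task


class ExampleUser(HttpUser):
    @task
    def get_resource(self):
        # catch_response lets us decide ourselves whether the request passed.
        with self.client.get("/some_example/resource?id=123", catch_response=True) as response:
            if response.elapsed.total_seconds() > 2.0:
                response.failure("Response time exceeded 2000 ms")
```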

I do like your idea of rules/thresholds that would mark the request as failed (or maybe a third state: "ok but with failed rule") if it can be implemented cleanly, so if you or someone else would like to make a PR, I would definitely consider it. Not sure it would be easy to do, though.


ktchani commented Apr 7, 2024

  1. Yeah, I'm familiar with that functionality. It's great, but not really streamlined. Passing a rule should essentially execute something similar but ALSO save the result as part of the JSON data at the end.
  2. That is cool, I haven't looked at the locust-plugins package. Maybe what I'm proposing is safer as a plugin (to keep Locust clean). I'll have to look into the code and see how it interacts with Locust (a rough sketch of a plugin-style hookup is below).
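My understanding is that the plugin route would not need to touch Locust core at all: the plugin module just registers event listeners and the locustfile imports it. A rough sketch (the module and listener names are made up; events.test_stop and environment.stats.total are existing Locust APIs):

```python
# threshold_rules.py -- hypothetical plugin module
from locust import events


@events.test_stop.add_listener
def report_thresholds(environment, **kwargs):
    # The rule evaluation from the earlier sketch could live here instead of
    # in the locustfile, keeping Locust itself untouched.
    total = environment.stats.total
    print(f"avg response time: {total.avg_response_time:.0f} ms")


# locustfile.py would then only need:
#   import threshold_rules  # noqa: F401
```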

Thanks for the quick response. I'll try to take a look this week and maybe come up with a PoC.

In the meantime, should I mark this issue as closed or keep it open for discussion?


cyberw commented Apr 8, 2024

If it can be done in a plugin, then that is nice, but I don't mind having this in core (if it can be done cleanly).

You can leave it open!
