I am suggesting a new feature, not asking a question
Description
I've worked pretty extensively with the Locust framework. Locust is a fantastic bare-bones (lightweight) framework; I really enjoy using it, and kudos to all the contributors. That being said, it seems to me (unless I'm missing something) that it is very pass/fail oriented. It does not have an out-of-the-box way to define and apply custom failure rules, and what counts as a failure differs a lot from one use case to another. Therefore, I believe plain pass/fail is insufficient.
As an example:
If I have a test that hits an endpoint: some_example/resource?id=123
There is no way to specify what my expectations of this endpoint are. If the endpoint takes 2000 ms to respond, I might very well consider that a critical failure. Currently, my options are as follows:
Look at the HTML report and see if there is any regression.
Save the raw JSON data to some data store and keep historical records. From there I can set up various tooling to alert me on regression, etc.
I suggest implementing an ability to:
Define rules that take args like attribute, operator, and threshold. For example: attribute=average response time, operator=greater than, threshold=2000 ms.
Enable specific rules on specific tests. The idea is that there will be an output at the end of my test runs that I can then use to automate things like sending notifications, retrying tests to see if a failure is transient, etc.
A basic approach would be to extend the request method to accept an optional rule arg and process the rules downstream. The actual rule evaluation would need to happen at the end of the test run, since attributes can be aggregations over the whole run, such as averages. I found it difficult to understand how the output JSON is generated; ideally, this would just enhance that output.
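A minimal sketch of how the end-of-run evaluation could work, using Locust's real test_stop event and stats object. The Rule container, the RULES registry, and the check_rules listener are hypothetical names for illustration, not existing API:

```python
from dataclasses import dataclass
import operator

from locust import HttpUser, task, events


@dataclass
class Rule:
    # Hypothetical rule container: an attribute of the aggregated stats entry,
    # a comparison operator, and a threshold (e.g. avg_response_time > 2000 ms).
    attribute: str
    op: callable
    threshold: float

    def failed(self, stats_entry):
        return self.op(getattr(stats_entry, self.attribute), self.threshold)


# Illustrative registry mapping request name -> rules; a real implementation
# might instead accept rules per request call and collect them downstream.
RULES = {
    "/some_example/resource": [Rule("avg_response_time", operator.gt, 2000)],
}


class MyUser(HttpUser):
    host = "http://localhost"

    @task
    def get_resource(self):
        self.client.get("/some_example/resource?id=123", name="/some_example/resource")


@events.test_stop.add_listener
def check_rules(environment, **kwargs):
    # Evaluate rules against the aggregated stats once the run is over,
    # since attributes like averages only make sense at that point.
    for name, rules in RULES.items():
        entry = environment.stats.get(name, "GET")
        for rule in rules:
            if rule.failed(entry):
                print(f"RULE FAILED for {name}: {rule.attribute}="
                      f"{getattr(entry, rule.attribute)} vs threshold {rule.threshold}")
                environment.process_exit_code = 1
```

Evaluating everything in a test_stop listener keeps the rule logic out of the hot request path; the remaining work would be to also serialize the rule results into the report/JSON output, which this sketch does not attempt.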
I do like your idea of rules/thresholds that would mark the request as failed (or maybe a third state: "ok but with failed rule") if it can be implemented cleanly, so if you or someone else would like to make a PR I would definitely consider it. Not sure it would be easy to do, though...
Yeah, I'm familiar with that functionality. It's great, but not really streamlined. Passing a rule should essentially execute something similar, but ALSO save the result as part of the JSON data at the end.
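For context, the existing functionality being referenced here is presumably Locust's catch_response mechanism, which lets a test decide pass/fail per request based on its own criteria. A minimal example of that existing pattern (the endpoint and the 2000 ms threshold are illustrative):

```python
from locust import HttpUser, task


class MyUser(HttpUser):
    host = "http://localhost"

    @task
    def get_resource(self):
        # catch_response=True lets the test mark the request as failed even if
        # the status code is 200, e.g. when the response is slower than 2000 ms.
        with self.client.get("/some_example/resource?id=123",
                             catch_response=True) as response:
            if response.elapsed.total_seconds() > 2.0:
                response.failure("Response took longer than 2000 ms")
            else:
                response.success()
```

This handles per-request decisions, but nothing is aggregated or surfaced in the end-of-run JSON, which is the gap the proposed rules would fill.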
That is cool; I haven't looked at the locust-plugins package. Maybe what I'm proposing is safer as a plugin (to keep Locust core clean). I'll have to look into the code and see how it interacts with Locust.
Thanks for the quick response. I'll try to take a look this week and maybe come up with a PoC.
In the meantime, should I mark this issue as closed, or keep it open for discussion?