
Programmatic skip scenario feature #2502

Merged
merged 9 commits into from May 22, 2024

Conversation

PiotrNestor
Contributor

@PiotrNestor PiotrNestor commented Mar 11, 2024

@sriv

This is a discussion and proposal for a programmatic skip scenario feature.

Before providing the actual code changes I'd like to discuss, review and agree on how this should be implemented in the different gauge components.

The proposed programmatic scenario skip feature is similar to the C# NUnit programmatic test ignore feature (e.g. Assert.Ignore).
Skipping a scenario should be possible during execution of a valid scenario, from a hook or a step method.

Note that this is different from the present skipping of a scenario during the validation phase, when some step is missing or invalid, although the outcome would be similar: the scenario is reported as skipped.

Implementation proposal:

  1. 'SkipScenarioException' is added to 'gauge-csharp-lib'.
  2. 'SkipScenarioException' can be thrown in a hook or step method to programmatically indicate that scenario execution is interrupted, resulting in a scenario skip.
  • 'gauge-dotnet' has logic in 'HookExecutor.Execute' and 'StepExecutor.Execute' to catch 'SkipScenarioException' and report the condition via 'ExecutionResult'.
  • 'SkipScenarioException' is not a scenario failure. An additional boolean status 'skipScenario' is added to 'ExecutionResult'.
  3. The gauge runner already has logic to mark a scenario as skipped. This logic is reused to handle 'skipScenario' in the 'ExecutionResult' during scenario execution.
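The steps above could be sketched roughly as follows. This is a hypothetical illustration based only on this proposal, not the merged code: the class shape, the 'ExecutionResult' field names ('Failed', 'SkipScenario', 'Message'), and the executor snippet are all assumptions.

```csharp
using System;

namespace Gauge.CSharp.Lib
{
    // Hypothetical sketch of step 1: an exception type in gauge-csharp-lib
    // that a hook or step method throws to request a scenario skip.
    public class SkipScenarioException : Exception
    {
        public SkipScenarioException(string reason) : base(reason) { }
    }
}

namespace Gauge.Dotnet
{
    using Gauge.CSharp.Lib;

    // Hypothetical sketch of step 2: the executor catches the exception and
    // reports a skip (not a failure) via the execution result.
    public class ExecutionResult
    {
        public bool Failed;
        public bool SkipScenario;
        public string Message;
    }

    public static class HookExecutorSketch
    {
        public static ExecutionResult Execute(Action hookMethod)
        {
            var result = new ExecutionResult();
            try
            {
                hookMethod();
            }
            catch (SkipScenarioException ex)
            {
                result.Failed = false;        // a skip is not a failure
                result.SkipScenario = true;   // proposed status flag (step 2)
                result.Message = ex.Message;  // reason surfaced to the runner
            }
            return result;
        }
    }
}
```

Step 3 would then be handled entirely on the gauge runner side, reusing the existing skipped-scenario marking.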

Our question now is whether the above is clear enough to proceed with the actual implementation.

Signed-off-by: Piotr Nestorow <piotr.nestorow@systemverification.com>
@PiotrNestor
Contributor Author

@sriv

Any comments regarding the above functionality proposal?

@sriv
Member

sriv commented Mar 12, 2024

I need to think this through @PiotrNestor . The first thing that hits me is the impact on statistics especially for table driven execution.

And a couple of other scenarios. Will list them in some time.

More importantly, how would this feature help? Genuinely interested in the use case

@PiotrNestor
Contributor Author

PiotrNestor commented Mar 13, 2024

@sriv

There are a number of use cases for the feature of programmatically skipping a scenario:

  1. An integration with a test management tool that holds the list of scenarios to be executed and drives the test suite execution. When a scenario starts, an API call to the test management tool checks whether the scenario is in the run list; the scenario is skipped if it is not. Presently this functionality needs to be achieved with a cumbersome pre-execution step that matches the scenario list against gauge's tag functionality.
  2. Some test condition, configuration or capability is not fulfilled at runtime. It may be more suitable to skip the test instead of failing it. Presently there is no such option; the tag functionality is too static to handle this.
  3. The programmatic skip functionality exists in other test frameworks. If they have it, we want it as well :)
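For use case 1, a hook along these lines could do the check. This is a sketch under stated assumptions: 'ITestPlanClient' and its 'IsInRunList' method are a hypothetical stand-in for the test management tool's API, and 'SkipScenarioException' is the exception proposed in this PR (not an existing gauge API).

```csharp
using Gauge.CSharp.Lib;
using Gauge.CSharp.Lib.Attribute;

public class RunListHooks
{
    // Hypothetical client for the test management tool's run-list API.
    public interface ITestPlanClient
    {
        bool IsInRunList(string scenarioName);
    }

    private readonly ITestPlanClient _testPlan;

    public RunListHooks(ITestPlanClient testPlan) => _testPlan = testPlan;

    [BeforeScenario]
    public void SkipUnplannedScenario(ExecutionContext context)
    {
        // Skip (rather than fail) any scenario the test planner did not select.
        var name = context.CurrentScenario.Name;
        if (!_testPlan.IsInRunList(name))
            throw new SkipScenarioException($"'{name}' is not in the current run list");
    }
}
```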

@sriv
Member

sriv commented Mar 19, 2024

hi @PiotrNestor , apologies for the delay.

As a feature I can understand this can be useful. I am interested to know your thoughts on the below:

  1. gauge takes a list of scenarios as part of its arguments, and the scenarios are expected to be run in order. Is this something that can help? i.e. rather than lazy evaluation of skipping a scenario, it gets filtered out earlier. I am not referring to tag based filtration, rather something like gauge run specs/spec1.spec:4 specs/spec1.spec:32 to run only two specific scenarios. The list can be computed based on input from your test planning tool.

The benefits: it keeps the logic of planning your test execution outside the test suite, and your test suite need not connect to the test planner (useful if you are running it in CI). It also helps to cache the run list, so you don't need to fetch the list to be executed at every run if the data is static.

The cons: it makes the orchestration a bit tricky. There is also an open issue with table row selection which might hamper this (but, to be fair, this will be the case irrespective of the approach).

  2. data driven execution
  • when using a data table (or a CSV) for table driven execution, the same scenario gets executed repeatedly for each row. Adding conditional exemption should include a way to ignore a specific table row (note that this is currently a flaw in gauge's design). To clarify: there is a --table-row flag that controls the execution of a table driven specification. The flaws are:
    a. the table driven execution can be via a spec table, a scenario table, or both. A single flag does not help.
    b. the flag is universal, so there is no way to choose a specific row number for one table driven spec and another for a different one, or even to apply the --table-row flag to a specific spec.

The reason to bring this up is that this could be a case to consider if we are programmatically skipping.

Beyond these, I think we also need to think about bringing in visibility into reasons for skipping the scenario in the reports, but that is a separate thing altogether.

In summary - I guess I am ok with this feature, and the first cut of this would be to ignore a scenario altogether without worrying about the implications on table driven v/s normal execution.

@PiotrNestor
Contributor Author

@sriv
We'll summarize how we'd like to proceed

@PiotrNestor
Contributor Author

PiotrNestor commented Mar 20, 2024

@sriv

  1. gauge takes a list of scenarios as part of its arguments

This feature is not good enough for the integration with a test management tool that selects a list of scenarios to run. Problems:

  • there is presently no way to automatically extract the list of scenario file:line references
  • the above list must be built before any specific test run, because files might have been edited

A possible implementation would then need to:

  • parse all spec files and build the file:line references
  • match the requested run scenario list against the above reference list
  • build the 'gauge list_of_files' command to run

If skipping a scenario during execution is available, the above integration becomes simple.

@sriv
Member

sriv commented Mar 20, 2024

there is presently no way to automatically extract the list of scenario file:line references

Have you seen https://manpage.gauge.org/gauge_list before? Basically gauge list --scenarios is expected to do this. We can of course enhance it if needed

I suppose if this list is good enough, then comparing it with a run list to come up with a final list should be possible with a small script.

I am reminded of an issue where gauge run should take the list from stdin, allowing chaining of commands (Unix like) to work.

For example, I was exploring the option to do gauge run < run.txt. I don't think this works yet, but just wanted to bounce off the idea.

@sriv
Member

sriv commented Mar 21, 2024

I just realized that gauge list --scenarios lists the scenario names and not the file:line format. So I guess that needs to be built.

So, if you still prefer the solution of skipping scenarios programmatically, I guess we can go with it.

In an ideal world, I would love to add support for a plugin that determines the run list for a suite, but that is a larger change.

Signed-off-by: Piotr Nestorow <piotr.nestorow@systemverification.com>
@PiotrNestor
Contributor Author

@sriv
Yes, we now have an implementation that we're testing.
We'll provide a PR when we think it's good :)

Signed-off-by: Piotr Nestorow <piotr.nestorow@systemverification.com>
Signed-off-by: Piotr Nestorow <piotr.nestorow@systemverification.com>
@PiotrNestor
Contributor Author

@sriv
We have now an implementation for this functionality.
I've added now:

When the above are accepted, I can proceed with the relevant gauge-dotnet and gauge PRs.

With the programmatic skip, on the scenario implementation side it's possible to:

  • define in 'BeforeScenario':

[BeforeScenario]
public void BeforeScenario(ExecutionContext context)
{
    throw new SkipScenarioException("Skip scenario execution in a before scenario hook");
}

  • in a step definition:

[Step("Skip scenario <reason>")]
public void SkipScenario(string reason)
{
    throw new Gauge.CSharp.Lib.Attribute.SkipScenarioException(reason);
}

Signed-off-by: Piotr Nestorow <piotr.nestorow@systemverification.com>
@chadlwilson
Contributor

I think your dependent commits have been merged now, so you can probably rebase this off master and use the latest protos from go. It might be easier to start again with go.mod/sum rather than trying to rebase commits.

Signed-off-by: Piotr Nestorow <piotr.nestorow@systemverification.com>
Signed-off-by: Piotr Nestorow <piotr.nestorow@systemverification.com>
@PiotrNestor PiotrNestor marked this pull request as ready for review April 3, 2024 09:16
@PiotrNestor
Contributor Author

@sriv
This is ready for review ...

The main points of the skip scenario sequence:

  1. In the dotnet implementation, it's possible to throw SkipScenarioException in a scenario before hook or in a test step. Gauge's WriteMessage is also used to report the reason for the skip.
  2. gauge-dotnet reports the skipped scenario in the ExecutionResult as Success, but with SkipScenario set to True.
  3. The gauge runner stops the scenario execution and completes with the after hooks and teardown. The scenario is reported as skipped.
  4. Our tests show correct HTML and XML reports where the scenario is correctly reported as skipped.
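Point 1 could look like the following in a step implementation. The 'GaugeMessages.WriteMessage' call is the messaging API from gauge-csharp-lib; the 'Capabilities' helper and the capability-check scenario are hypothetical, made up for this illustration.

```csharp
using System;
using Gauge.CSharp.Lib;
using Gauge.CSharp.Lib.Attribute;

public class ConditionalSteps
{
    // Hypothetical helper standing in for a real runtime environment check.
    static class Capabilities
    {
        public static bool IsAvailable(string name) =>
            Environment.GetEnvironmentVariable("CAP_" + name.ToUpperInvariant()) == "1";
    }

    [Step("Check required capability <capability>")]
    public void CheckCapability(string capability)
    {
        if (!Capabilities.IsAvailable(capability))
        {
            // Report the reason in the execution report, then request a skip.
            GaugeMessages.WriteMessage($"Skipping: capability '{capability}' unavailable");
            throw new SkipScenarioException($"Capability '{capability}' is not available");
        }
    }
}
```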

Contributor

github-actions bot commented Apr 3, 2024

Benchmark Results

java_maven_parallel.csv

Commit CPU Memory Time ExitCode
44ebad3 43% 241196 0:30.34 0
a550d71 43% 229944 0:30.54 0
aca081e 45% 248244 0:30.68 0
696941f 41% 238616 0:32.00 0

java_gradle_serial.csv

Commit CPU Memory Time ExitCode
44ebad3 9% 121916 0:25.68 0
a550d71 9% 117132 0:25.04 0
aca081e 9% 111040 0:24.87 0
696941f 9% 119968 0:24.28 0

java_gradle_multithreaded.csv

Commit CPU Memory Time ExitCode
44ebad3 9% 120368 0:23.62 0
a550d71 9% 122548 0:23.36 0
aca081e 9% 116704 0:23.63 0
696941f 10% 119612 0:23.97 0

java_simple_multithreaded.csv

Commit CPU Memory Time ExitCode
44ebad3 25% 68024 0:14.01 0
a550d71 29% 67584 0:12.32 0
aca081e 29% 71592 0:12.17 0
696941f 28% 69740 0:12.18 0

java_maven_serial.csv

Commit CPU Memory Time ExitCode
44ebad3 81% 233976 0:18.09 0
a550d71 65% 262860 0:21.77 0
aca081e 80% 251776 0:16.80 0
696941f 77% 226384 0:18.66 0

java_gradle_parallel.csv

Commit CPU Memory Time ExitCode
44ebad3 5% 121144 0:45.50 0
a550d71 6% 118652 0:37.12 0
aca081e 5% 121208 0:40.88 0
696941f 5% 107816 0:44.57 0

java_maven_multithreaded.csv

Commit CPU Memory Time ExitCode
44ebad3 66% 210124 0:16.84 0
a550d71 58% 242188 0:19.28 0
aca081e 63% 234180 0:18.12 0
696941f 51% 233064 0:22.33 0

java_simple_parallel.csv

Commit CPU Memory Time ExitCode
44ebad3 24% 71404 0:23.26 0
a550d71 21% 71984 0:25.67 0
aca081e 23% 71632 0:23.81 0
696941f 21% 73692 0:27.55 0

java_simple_serial.csv

Commit CPU Memory Time ExitCode
44ebad3 48% 69940 0:12.77 0
a550d71 53% 67672 0:11.35 0
aca081e 42% 65592 0:14.32 0
696941f 52% 65972 0:11.62 0

Notes

  • The results above are generated by running against seed projects in https://github.com/getgauge/gauge-benchmark
  • These results are not persisted, but on merging to master the benchmark will be rerun.
  • These benchmarks are run on GitHub Actions' agents, which are virtualized. Results are not to be taken as actual benchmarks. Rather, these are indicative numbers and make sense for comparison.

See Workflow log for more details.

@PiotrNestor
Contributor Author

@sriv
Any progress on this PR?

@jensakejohansson
Contributor

@sriv ping!

@jensakejohansson
Contributor

jensakejohansson commented May 21, 2024

Hi!

@sriv Do you have any opinions regarding this PR? We'd like to know if it can be accepted, or if it needs to be improved/redesigned or if it should be considered as rejected altogether?

By accident I found these two requests as well, which I think will be solved by this PR (we only updated Gauge and the .NET runner; to get it working in Java, the Java runner also needs the update, obviously).

getgauge/gauge-java#691
getgauge/gauge-java#683

Best regards,

@sriv sriv enabled auto-merge (squash) May 22, 2024 13:14
@sriv
Member

sriv commented May 22, 2024

sorry about the delay @PiotrNestor @jensakejohansson - I have been travelling a lot most of the summer and work is super tight.

I have approved this, but I guess this PR needs a rebase.

@sriv sriv merged commit f974c53 into getgauge:master May 22, 2024
18 checks passed