Test Dependency Attribute #51
This bug duplicates and replaces https://bugs.launchpad.net/nunit-3.0/+bug/740539, which has some discussion. While dependency and ordering are not identical, they are related, in that ordering of tests is one way to model dependency. However, other things may impact ordering as well, such as the level of importance of a test. At any rate, the two problems need to be addressed together.
I like the MbUnit model a lot.
What does MbUnit do if you set up a cyclic "dependency"?
@CharliePoole If you create a cycle or specify a non-existent test method as a dependency, MbUnit throws a runtime exception. Since depending on a class requires the type itself, that would either behave similarly (when depending on a non-test class) or be a compile-time error (when the type doesn't exist).
Any update on this issue? I chose MbUnit because of the ordering. Now that it's on indefinite hiatus, I need to look for an alternative. It would be nice if NUnit could support this essential feature for integration testing.
Sorry, no update yet, although I also came from MbUnit and use this for some of our integration tests. If you just want ordering of tests within a test class, NUnit will run the tests in alphabetical order within a test fixture. This is unsupported and undocumented, but it works until we have an alternative.
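For illustration, a minimal sketch of that (unsupported) alphabetical-ordering crutch, using zero-padded name prefixes so the lexical order matches the intended execution order; the fixture and method names here are invented:

using NUnit.Framework;

[TestFixture]
public class InstallationTests
{
    // NUnit happens to run tests within a fixture alphabetically
    // (unsupported, undocumented behavior), so zero-padded prefixes
    // force the intended order.
    [Test] public void Step01_CreateSchema() { /* ... */ }
    [Test] public void Step02_SeedData() { /* ... */ }
    [Test] public void Step03_VerifyInstall() { /* ... */ }
}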
I thought this was coming in v3?
Yes, but as yet it isn't being worked on. After the first 3.0 alpha, we'll …
For 3.0, we will implement a test order attribute rather than an MbUnit-like dependency attribute. See issue #170.
See my comment in #170. Ordering is very limited and becomes a maintenance burden (unless you use multiples of ten so you can insert tests in the middle without renumbering everything). MbUnit has arbitrary dependency graphs, which I (or maybe I should say "we", since I'm not the only one) really need. Depending on alphabetical order is a crutch, and a pretty weak one considering it could change at any time.
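To make the multiples-of-ten trick concrete, here is a sketch using an order attribute of the kind #170 proposes (NUnit 3 eventually shipped this as [Order]; the fixture and method names are invented):

using NUnit.Framework;

[TestFixture]
public class AccountSmokeTests
{
    // Gaps of ten let a new test slot in later (e.g. Order(15))
    // without renumbering everything below it.
    [Test, Order(10)]
    public void CreateAccount() { /* ... */ }

    [Test, Order(20)]
    public void DepositFunds() { /* ... */ }

    [Test, Order(30)]
    public void WithdrawFunds() { /* ... */ }
}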
Hi,

Yes. The idea to use ordering is based on an unstated assumption: that … Note that this issue relates to the ordering of test methods, not … Basically, we decided that #170 requires too much design work to … The use of an OrderAttribute was viewed as a way of quickly giving …

Indeed. We have always advised people not to use that for exactly that reason.

Charlie
I have not used MbUnit in years but would like to add to this discussion, if my memory serves me correctly. Assume [Test Z] depends on [Test A], and I then run [Test Z]. It appears that MbUnit evaluates the result of [Test A] first. If there is no available result, it automatically runs [Test A] before attempting to run the requested [Test Z], and it will only run [Test Z] if [Test A] passes. Otherwise, [Test Z] is marked as inconclusive, indicating its dependency on the failed [Test A]. These might provide some insight: …
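From that description, a sketch of the resolution logic as a recursive check; this is invented illustration code, not MbUnit's (or NUnit's) actual implementation, and every type here is made up:

using System.Collections.Generic;

enum Status { Passed, Failed, Inconclusive }

class TestNode
{
    public string Name;
    public List<TestNode> Dependencies = new List<TestNode>();
    public System.Func<Status> Body;   // the test code itself
}

class DependencyRunner
{
    private readonly Dictionary<TestNode, Status> results = new Dictionary<TestNode, Status>();

    public Status Run(TestNode test)
    {
        Status cached;
        if (results.TryGetValue(test, out cached))
            return cached;                       // reuse an existing result

        // Evaluate (and if necessary run) every dependency first.
        // A real implementation would also detect cycles and fail fast.
        foreach (var dependency in test.Dependencies)
        {
            if (Run(dependency) != Status.Passed)
                return results[test] = Status.Inconclusive;  // never run the dependent test
        }

        return results[test] = test.Body();
    }
}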
@circa1741: We will work on this in the "future", by which I mean the release after 3.0. Full-on dependency is really a pretty complex beast to implement and we are already taking on a lot in 3.0. Ordering of tests is a band-aid if what you want is true dependency, but it's pretty easy to implement. By doing ordering (#170) in 3.0 we do run a risk: some users will treat it as the answer to their dependency problems and come to depend on it in the wrong context. Still, it seems better than doing nothing. I'd like to find the time to write an article on the various kinds of dependency and ordering and how they differ in usage and implementation... maybe... ;-)
Correction: after I wrote the preceding comment, I noticed that #170 is also scheduled for 'future'. We'll continue to discuss whether it's possible to include some ordering feature in 3.0 as opposed to 3.2, but at the moment neither is planned.
(Samples taken from http://blog.bits-in-motion.com/search?q=mbunit) When writing integration tests, it is sometimes useful to chain several tests together to capture a logical sequence of operations. MbUnit can capture this chaining either with dependencies: …
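The blog samples did not survive the formatting here; the following is a reconstruction of the dependency style only, assuming MbUnit's DependsOn attribute and using invented test names:

using MbUnit.Framework;

[TestFixture]
public class CheckoutFlowTests
{
    [Test]
    public void AddItemToCart() { /* step 1 of the logical sequence */ }

    // MbUnit runs AddItemToCart first; if it failed, this test is
    // skipped (marked inconclusive) rather than run.
    [Test, DependsOn("AddItemToCart")]
    public void SubmitOrder() { /* step 2, depends on step 1 */ }
}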
Thanks for the example code. It gives us something to aim for. Your first example is relevant to this issue; the second is exactly what we are planning for #170.
I have an idea that is more of a twist on dependency and ordering. The discussion so far regarding dependency has been "go ahead and run Test H (and possibly 8 other tests) only if Test 8 passes." In other words, there is no point in running Test H, because if Test 8 fails then I know that Test H will also fail.

How about a dependency for when a test fails? Scenario: … Now, because the tests are "not independent", I know that if Test C fails then all the following tests will also fail. Therefore, I need more tests to run in order to get a bigger picture of the smoke test. I need to be able to configure Test 1 to run if Test A fails, Test 2 if Test B fails, Test 3 if Test C fails, and so on. My Test 3 is designed to be independent of Test B, so if it fails then I have a better understanding of why Test C failed earlier. As it turns out, my Tests 4 (for Test D), 5 (for E), 6 (for F), etc. all pass. Then I understand that only the feature covered by Test C is the issue.

Why not just run Tests 1, 2, 3, etc. instead? Because those are isolated, independent tests, so I would not be doing integration tests. Again, I need a smoke test that covers a lot of ground. Maybe something like the attributes below. This will allow finer control in my automation test design.
How about something like these attributes instead? Please note which test each one is attached to. These attributes should be usable at any of several levels: assembly, test fixture, or test.

[DependsOnPassing("Test E")] (attached to Test N)
[RunOnFail("Test N")] (attached to Test E)
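A sketch of how those proposed attributes might look in use, following the scenario above; DependsOnPassing and RunOnFail are purely hypothetical (they do not exist in NUnit), and the method names are invented:

using NUnit.Framework;

public class SmokeTests
{
    [Test]
    public void Test_C() { /* a step in the smoke-test chain */ }

    // Hypothetical: run only if Test_C passed; no point otherwise.
    [Test, DependsOnPassing("Test_C")]
    public void Test_D() { /* the next step in the chain */ }

    // Hypothetical: an independent diagnostic, run only when Test_C
    // fails, to narrow down why it failed.
    [Test, RunOnFail("Test_C")]
    public void Test_3() { /* isolated re-check of the feature behind Test_C */ }
}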
Copied from #1031, which obviously duplicates this. Hopefully the example helps and the keywords direct others here... I have a couple of cases where I would like to mark a test as a prerequisite of another test. It would be nice to have an attribute that indicated tests that were prerequisites of the current test. In the case where routines depend on one another, it is possible to know that a given test is going to fail because a sub-routine failed its test. If the test is a long-running one, there really isn't any point in running it if the sub-routine is broken anyway. Contrived example: …

Given the example, if the test … I would conceive of this working by chaining the tests: …
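The contrived example itself was lost above; a minimal sketch of the chaining idea, assuming a hypothetical Prerequisite attribute and invented method names:

using NUnit.Framework;

public class ParserTests
{
    [Test]
    public void Tokenizer_HandlesBasicInput() { /* fast sub-routine test */ }

    // Long-running test: there is no point running it when the tokenizer
    // is already known to be broken, so chain it behind the tokenizer
    // test. Prerequisite is hypothetical, not a real NUnit attribute.
    [Test, Prerequisite(nameof(Tokenizer_HandlesBasicInput))]
    public void Parser_HandlesLargeDocument() { /* ... */ }
}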
Is there any estimate of when this feature will be available? I haven't found this information in other threads; maybe it's a duplicate question.
It's in the 'Future' milestone, which means after 3.2, the latest actual milestone we have. However, we are about to reallocate issues to milestones, so watch for changes.
I have no idea if I can edit a wiki. Never tried, and have no idea how to. Can you give me a starting "push"? :)
@oznetmaster, editing the wiki is pretty easy: …
@oznetmaster What you describe is pretty much how the parallel dispatcher already works. Items are assigned to individual queues depending on their characteristics. Currently, it takes into account parallelizability and Apartment requirements. All items, once queued, are ready to be executed. It was always planned that dependency queues would be added as a further step. I plan to use #1096 as an "excuse" to implement that infrastructure. Once it's implemented, it can be further exposed to users as called for in #51. I'll be preparing a spec for the underlying scheduling mechanism (dispatcher) as well, and I'd like your comments on it.
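Purely as an illustration (none of this is NUnit's actual dispatcher code), a sketch of what a dependency queue layered on such a dispatcher might look like: items wait until their dependencies complete and are then promoted to a ready queue. All names here are invented:

using System.Collections.Generic;

class WorkItem
{
    public string Name;
    public List<string> Dependencies = new List<string>();
}

class DependencyQueue
{
    private readonly Queue<WorkItem> ready = new Queue<WorkItem>();
    private readonly List<WorkItem> waiting = new List<WorkItem>();
    private readonly HashSet<string> completed = new HashSet<string>();

    public void Enqueue(WorkItem item)
    {
        if (item.Dependencies.TrueForAll(completed.Contains))
            ready.Enqueue(item);   // all dependencies done: ready to run
        else
            waiting.Add(item);     // park until dependencies complete
    }

    public void MarkCompleted(string name)
    {
        completed.Add(name);
        // Promote waiting items whose dependencies are now all satisfied.
        waiting.RemoveAll(w =>
        {
            if (!w.Dependencies.TrueForAll(completed.Contains)) return false;
            ready.Enqueue(w);
            return true;
        });
    }

    public WorkItem Dequeue() => ready.Count > 0 ? ready.Dequeue() : null;
}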
@oznetmaster I created an empty page for you: https://github.com/nunit/dev/wiki/Test-Dependency-Attribute
Are we committed to calling the attribute DependsOnAttribute? How about something more general like "DependenciesAttribute"? It is niggling, I know :(
You should write it up as you think it should be. Then we'll all fight about it. :-)
So I have :)
Suggestion: add a section that explains the motivation for each type of dependency. For example, when would a user typically want to use AfterAny, etc.? As a developer, it's always tempting to add things for "completeness." Trying to imagine a user needing each feature is a useful restraint on this tendency. Unfortunately, users generally only tell us when something is missing, not when something is not useful to them.
For what it's worth, my input: I'd rather not define a dependency per test method; that becomes rather tedious (and hard to maintain) if you have more than a few tests. Instead, I want to establish an order between test fixtures. This comes from the following case we currently have: we are using MSTest for ordered tests at the moment. Except for the fact that it is MSTest, it works great, because with the test ordering I can express two things about a test: certain tests may not execute before another test, and other tests have a dependency on another test and may only be executed after the other test(s) have been executed. Let's say that the integration test: …
With MSTest I can express this case fine: each 'ordered test' in MSTest can contain ordered tests itself. Also, ordered tests can have a flag set to abort if one of the tests fails.
For example, … To be concrete, I was thinking of an interface like this to express the test fixture ordering: …
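The commenter's interface code was lost; purely as an assumption, it might have looked something like this (all names invented; InstallationTests, UpgradeTests and WebTests stand in for the user's real fixture classes):

using System;
using System.Collections.Generic;

// Implemented once per test assembly to enumerate fixtures in the
// order they must execute.
public interface ITestFixtureOrder
{
    IEnumerable<Type> GetFixtureOrder();
}

public class IntegrationTestOrder : ITestFixtureOrder
{
    public IEnumerable<Type> GetFixtureOrder()
    {
        yield return typeof(InstallationTests);   // wipe and reinstall first
        yield return typeof(UpgradeTests);        // runs against the fresh install
        yield return typeof(WebTests);            // ordinary tests last
    }
}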
This is much more maintainable (and obvious) than having dependency attributes on every fixture, and it scales much better as the number of fixtures increases. Note that for test ordering within fixtures, I would simply use the existing OrderAttribute. For test ordering between fixtures I have set up a prototype, and I have found that expressing dependencies between fixtures using attributes becomes messy, even with only a few tests. Please also note that the prototype wouldn't allow ordering beyond the namespace the fixture is defined in, because each fixture is part of a parent test with the name of the namespace. I would need to implement my own …
Update from my side: in the meantime I've managed to implement test ordering without the need to fork NUnit. It is "good enough" for me, so I use it now. It is already a lot better than the fragile state of many MSTest ordered tests.
Out of curiosity -- any chance the dependency feature is planned for the next release?
No plans at the moment. FYI, you can see that here on GitHub by virtue of the fact that it's not assigned to anyone and has no milestone specified. For normal-priority items like this one, we don't usually pre-plan a particular release; we reserve that for high and critical items. This one will only get into a release plan when somebody decides they want to do it, self-assigns it, and brings it to a point where it's ready for merging. In fact, although not actually dependent on it, this issue does need a bunch of stuff from #164 to be worked on effectively. I'm working on that and expect to push it into the next release.
Got referred here from there. I'm a firm believer that any fault should only ever cause one unit test to fail; having a whole bunch of others fail because they're dependent on that fault not being there is undesirable at best, and more than a little time-consuming when you try to track down which of the failing tests is the relevant one. Edit: Just looking at my existing code, I've got a 'Prerequisite(Action action)' method in many of my test fixtures that wraps the call to action in a try/catch that converts an AssertionException into an Inconclusive result. It also does some cleanup, like 'substitute.ClearReceivedCalls()' (from NSubstitute), and empties a list populated by 'testObj.PropertyChanged += (sender, e) => receivedEvents.Add(e.PropertyName)'; otherwise past actions potentially contaminate calls to 'substitute.Received...'. It might be necessary to also include some sort of 'Cleanup' method in the dependency attribute to support things like this.
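A minimal sketch of such a Prerequisite helper, without the NSubstitute cleanup (AssertionException and Assert.Inconclusive are real NUnit APIs; the rest is assumed):

using System;
using NUnit.Framework;

public abstract class TestFixtureBase
{
    protected static void Prerequisite(Action action)
    {
        try
        {
            action();
        }
        catch (AssertionException ex)
        {
            // The prerequisite itself failed, so this test's result is
            // meaningless: report Inconclusive rather than a second failure.
            Assert.Inconclusive("Prerequisite failed: " + ex.Message);
        }
    }
}

// Usage inside a test:
//   Prerequisite(() => Assert.That(parser.CanTokenize(input), Is.True));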
@Flynn1179 - I agree with you when it comes to unit tests. However, NUnit is also a great tool for other kinds of tests. For example, we use it for testing embedded firmware and are really missing this feature...
@Flynn1179 Completely agree with you. There are techniques to prevent spurious failures such as you describe that don't "depend" on having a test dependency feature. In general, use an assumption to check those things that are actually tested in a different test and are required for your test to make sense. It was a goal of NUnit 3 to extend NUnit for effective use with non-unit tests. We really have not done that yet; it may await another major release. OTOH, users are continuing to use it for those other purposes and trying to find clever ways to deal with the limitations. Here and there we have added small features and enhancements to help them, but it's really still primarily a unit-test framework. Personally, I doubt I would want to use dependency as part of high-level testing myself. Rather, I'd prefer to have a different kind of fixture that executed a series of steps in a certain order, reporting success or failure of each step and either continuing or stopping based on some property of the step. That, however, is a whole other issue. @espenalb I'd be interested to know what you feel is needed particularly for testing embedded firmware.
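Assume.That is NUnit's real API for that technique; a small sketch (the Calculator class and method names are invented):

using NUnit.Framework;

public class CalculatorTests
{
    [Test]
    public void Increment_AddsOne()
    {
        // If Add is broken, Add's own test already reports the failure;
        // this test comes out Inconclusive instead of failing as well.
        Assume.That(Calculator.Add(1, 1), Is.EqualTo(2));

        Assert.That(Calculator.Increment(1), Is.EqualTo(2));
    }
}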
We are actually very happy with what NUnit offers. We use a combination of fixture setup/setup/test attributes for configuring the device (including flashing firmware). Then we use different interfaces (serial port, JTAG, Ethernet) to interact with the devices; typically we send some commands and then observe results. Results can be a command response, or in advanced tests we use dedicated hardware equipment to measure device behavior. The NUnit assertion methods, and FluentAssertions, are then used to verify that everything is OK. With the test dependency attribute, we would have one failing test and then n ignored/skipped tests, where the skipped tests could clearly state that they were not executed because the other test failed... Another difference from regular unit testing is heavy use of the log writer. There is one issue there regarding multithreading and logging, for which I will create a separate issue if one does not already exist. Bottom line from us: we are very happy with NUnit as a test harness for integration testing. It gives us excellent support for a lot of advanced scenarios by using C# to interact with the device under test and other lab equipment. With ReportUnit we get nice HTML reports, and we also get Jenkins integration by using the v2-format NUnit test output.
@espenalb - Complete aside, but the Jenkins plugin has recently been updated to read NUnit 3 output. 🙂
Can someone give a small update on the status of this feature? Is it planned? I heard that you are in general planning to "open" NUnit to "non-unit" tests as well (which is basically our use case...). I think this attribute would be one step towards it :-)
This feature is still in the design phase, so other than using external libraries there is no built-in support currently.
I am sorry to pull up an old thread, but during the course of working through NUnit with a friend we stumbled into a case where, if we had such a feature, we could start to create integration tests (I realize NUnit is a unit testing framework, but it seems like we could get what we want if we had test dependency). First, here's an updated link to the proposed spec (the link from CharliePoole in #51 (comment) was dead): https://github.com/nunit/docs/wiki/Test-Dependency-Attribute-Spec Now for a use case; consider the following toy program and associated tests:

namespace ExampleProgram
{
    using System.Collections;

    using NUnit.Framework;

    public static class ExampleClass
    {
        // Deliberately wrong (should be a + b) so that the Add tests
        // below fail, which in turn makes the Increment tests fail.
        public static int Add(int a, int b)
        {
            return a - b;
        }

        public static int Increment(int a)
        {
            return Add(a, 1);
        }
    }

    public class ExampleClassTests
    {
        [TestCaseSource(typeof(AddTestCases))]
        public void Add_Tests(int a, int b, int expected)
        {
            int actual = ExampleClass.Add(a, b);
            Assert.That(actual, Is.EqualTo(expected));
        }

        [TestCaseSource(typeof(IncrementTestCases))]
        public void Increment_Tests(int a, int expected)
        {
            int actual = ExampleClass.Increment(a);
            Assert.That(actual, Is.EqualTo(expected));
        }
    }

    internal class IncrementTestCases : IEnumerable
    {
        public IEnumerator GetEnumerator()
        {
            yield return new TestCaseData(0, 1);
            yield return new TestCaseData(-1, 0);
            yield return new TestCaseData(1, 2);
        }
    }

    internal class AddTestCases : IEnumerable
    {
        public IEnumerator GetEnumerator()
        {
            yield return new TestCaseData(0, 0, 0);
            yield return new TestCaseData(0, 2, 2);
            yield return new TestCaseData(2, 0, 2);
            yield return new TestCaseData(1, 1, 2);
        }
    }
}

As an implementer I know that if any unit tests around Add fail, I would expect the unit tests around Increment to fail as well. Doing a lot of research online, it seems like others have worked around this by using a combination of factors (none of which are explicitly clear that …): …
Neither of these seems to scale well, or even work for that matter, when you use other features such as Parallelizable, and all of them require some external "post-processing" after the NUnit run has completed. Is this the best path forward (if we were to use pure NUnit)? Is this feature still being worked on? (In other words, if a PR were submitted, would it jam up anyone else working on something related?) There is lots of good discussion in this thread about cyclic dependencies and other potential issues with this feature; it is obviously not easy to fix, otherwise someone would have done it already. I am sure adding Parallelizable and TestCaseSource into the mix also increases complexity. I intend to dig more at some point, but before doing so I wanted to make sure that this was not a solved problem or already in the works.
@aolszowka Nobody has assigned it to themselves, which is supposed to mean that nobody is working on it. Smart of you to ask, nonetheless! If you want to work on it, some team member will probably assign it to themselves and "keep an eye" on you, since GitHub won't let us assign issues to non-members. I made this a feature and gave it its "normal" priority back when I was project lead. I intended to work on it "some day" but never did, and never will now that I'm not active in the project. I'm glad to correspond with you over any issues you find if you take it on. My advice is NOT to do what I tried to do: write a complete spec and then work toward it. As you can read in the comments, we kept finding things to disagree about in the spec, and nobody ever moved it to implementation. AFAIK (or remember) the prerequisite work on how tests are dispatched has already been done. I would pick one of the three types of dependency (see my comment from two-plus years ago) and just one use case, and work on that. We won't want to release anything until we are sure the API is correct, so you should probably count on a long-running feature branch that has to be periodically rebased or merged from master. Big job!
Yes - as far as I'm concerned!
Is anyone still interested in this being implemented? I might be interested in giving it a go.
It is still open, so feel free to give it a shot :-)
@Shiney Any possibility of doing this?
Not in the short term. I'm planning on taking some parental leave at some point, so if I end up having some free time to keep my C# up to date and this isn't done yet, I will probably do it, but I have no free time at the moment.
Hi,
I have a web app with extensive automated testing. I have some installation tests (delete the DB tables and reinstall from scratch), upgrade tests (from older to newer schema), and then normal web tests (get this page, click this, etc.)
I switched from NUnit to MbUnit because it allowed me to specify test orders via dependency (depend on a test method or test fixture). I switched back to NUnit, and would still like this feature.
The current work-around (since I only use the NUnit GUI) is to order test names alphabetically, and run them fixture by fixture, with the installation/first ones in their own assembly.