Test Dependency Attribute #51

Open
ashes999 opened this issue Nov 3, 2013 · 63 comments

@ashes999

ashes999 commented Nov 3, 2013

Hi,

I have a web app with extensive automated testing. I have some installation tests (delete the DB tables and reinstall from scratch), upgrade tests (from older to newer schema), and then normal web tests (get this page, click this, etc.)

I switched from NUnit to MbUnit because it allowed me to specify test orders via dependency (depend on a test method or test fixture). I switched back to NUnit, and would still like this feature.

The current work-around (since I only use the NUnit GUI) is to order test names alphabetically, and run them fixture by fixture, with the installation/first ones in their own assembly.

@CharliePoole
Contributor

This bug duplicates and replaces https://bugs.launchpad.net/nunit-3.0/+bug/740539 which has some discussion.

While dependency and ordering are not identical, they are related in that ordering of tests is one way to model dependency. However, other things may impact ordering, such as the level of importance of a test. At any rate, the two problems need to be addressed together.

@ashes999
Author

ashes999 commented Nov 3, 2013

I like the MbUnit model a lot:

  • Dependency on another test suite: annotate test (or test fixture) with [DependsOn(typeof(AnotherFixtureType))]
  • Dependency on another test: annotate test with [DependsOn("TestInThisFixture")]
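For illustration, a minimal sketch of how those annotations sit on a fixture (the fixture and test names here are invented; the attribute forms are the two described above):

using MbUnit.Framework;

[TestFixture]
public class InstallationTests
{
    [Test]
    public void TablesAreCreated() { /* ... */ }
}

[TestFixture]
[DependsOn(typeof(InstallationTests))]   // the whole fixture runs after InstallationTests
public class UpgradeTests
{
    [Test]
    public void SchemaIsUpgraded() { /* ... */ }

    [Test]
    [DependsOn("SchemaIsUpgraded")]       // runs after the named test in this fixture
    public void DataIsMigrated() { /* ... */ }
}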

@CharliePoole
Contributor

What does MbUnit do if you set up a cyclic "dependency"?


@ashes999
Author

ashes999 commented Nov 4, 2013

@CharliePoole if you create a cycle or specify a non-existent test method dependency, MbUnit throws a runtime exception. Since depending on a class requires the actual type, a bad class dependency would either be a similar runtime error (depending on a type that isn't a test) or a compile-time error (the type doesn't exist).

@candychiu

Any update on this issue? I chose MbUnit because of the ordering. Now that it's on indefinite hiatus, I need to look for an alternative. It would be nice if NUnit could support this feature, which is essential for integration testing.

@rprouse
Member

rprouse commented May 13, 2014

Sorry, no update yet, although I have also come from MbUnit and use this for some of our integration tests. If you just want ordering of tests within a test class, NUnit will run the tests in alphabetical order within a test fixture. This is unsupported and undocumented, but it works until we have an alternative.
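A minimal illustration of that naming trick (names are invented; as noted, relying on alphabetical order is unsupported and may change):

using NUnit.Framework;

[TestFixture]
public class DatabaseLifecycleTests
{
    // Alphabetical order of the method names encodes the intended run order.
    [Test] public void Step01_InstallFreshDatabase() { /* ... */ }
    [Test] public void Step02_UpgradeSchema()        { /* ... */ }
    [Test] public void Step03_RunWebTests()          { /* ... */ }
}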

@faddison

I thought this was coming in v3?

@CharliePoole
Contributor

Yes, but as yet it isn't being worked on. After the first 3.0 alpha, we'll add further features.

@CharliePoole CharliePoole added this to the 3.0 milestone Jul 29, 2014
@CharliePoole
Contributor

For 3.0, we will implement a test order attribute, rather than an MbUnit-like dependency attribute. See issue #170

@CharliePoole CharliePoole modified the milestones: 3.0, Future Aug 6, 2014
@ashes999
Author

ashes999 commented Aug 7, 2014

See my comment in #170. Ordering is very limited and high-maintenance (unless you use multiples of ten so you can insert tests in the middle without reordering everything). MbUnit has arbitrary dependency graphs, which I (or maybe I should say "we", since I'm not the only one) really need.

Depending on an alphabetic order is a crutch, and a pretty weak one considering this could change at any time.

@CharliePoole
Contributor

Hi,


See my comment in #170. Ordering is very limited and high-maintenance (unless you use multiples of ten so you can insert tests in the middle without reordering everything). MbUnit has arbitrary dependency graphs, which I (or maybe I should say "we", since I'm not the only one) really need.

Yes. The idea to use ordering is based on an unstated assumption: that it will hardly be used at all. Trying to control the runtime order of all your tests is a really bad idea. However, in rare cases it may be desirable to ensure some test runs first. For such limited use, an integer ordering is fine, and the difficulty of inserting new items into the order might well serve as a discouragement to unnecessarily ordering tests.

Note that this issue relates to the ordering of test methods, not test fixtures. Issue #170 applies to test fixtures and not methods as written, since it uses a Type as the dependent item. That said, the examples in #170 seem to imply that ordering of methods is desired.

Basically, we decided that #170 requires too much design work to include in the 3.0 release without further delaying it. We elected - in this and other cases - to limit new features in favor of a quicker release. Assigning #170 to the "Future" milestone doesn't mean it won't happen. Most likely we will address it in a point release.

The use of an OrderAttribute was viewed as a way of quickly giving "something" to those who want to control the order of test method execution. We felt we could get it in quickly. In fact, we may have been wrong. Thinking about it further, I can see that it may introduce a capability that is difficult to maintain in the face of parallel test execution. In fact, a general dependency approach may be what we need. For the moment, I'm moving both issues out of the 3.0 milestone until we can look into them further.

Depending on an alphabetic order is a crutch, and a pretty weak one considering this could change at any time.

Indeed. We have always advised people not to use that for exactly that reason. In fact, it is not guaranteed in NUnit 3.0.

Charlie

@circa1741

I have not used MbUnit in years but would like to add to this discussion if my memory serves me correctly.

Assuming [Test Z] depends on [Test A] and I then run [Test Z]: as I recall, MbUnit evaluates the result of [Test A] first. If there is no available result, MbUnit will automatically run [Test A] before attempting to run the requested [Test Z]. MbUnit will only run [Test Z] if [Test A] passes. Otherwise, [Test Z] will be marked as inconclusive and will indicate its dependency on the failed [Test A].

These might provide some insight:
https://code.google.com/p/mb-unit/source/browse/trunk/v3/src/MbUnit/MbUnit/Framework/DependsOnAssemblyAttribute.cs?spec=svn3066&r=1570

https://code.google.com/p/mb-unit/source/browse/trunk/v3/src/MbUnit/MbUnit/Framework/DependsOnAssemblyAttribute.cs?r=1570
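For what it's worth, the behavior described above boils down to something like the following (a purely illustrative model; none of these types correspond to a real MbUnit or NUnit API):

using System;
using System.Collections.Generic;

enum Outcome { Passed, Failed, Inconclusive }

class DependencyRunner
{
    readonly Dictionary<string, Func<Outcome>> _tests = new Dictionary<string, Func<Outcome>>();
    readonly Dictionary<string, Outcome> _results = new Dictionary<string, Outcome>();

    public void Register(string name, Func<Outcome> body) => _tests[name] = body;

    // Run "Test Z" with an optional dependency on "Test A".
    public Outcome Run(string name, string dependsOn = null)
    {
        if (dependsOn != null)
        {
            // Evaluate the dependency's result first, running it if there is no result yet.
            if (!_results.TryGetValue(dependsOn, out var depOutcome))
                depOutcome = Run(dependsOn);

            // Only run the dependent test if the dependency passed; otherwise mark it inconclusive.
            if (depOutcome != Outcome.Passed)
                return _results[name] = Outcome.Inconclusive;
        }

        return _results[name] = _tests[name]();
    }
}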

@CharliePoole
Contributor

@circa1741: We will work on this in the "future" by which I mean the release after 3.0. Full-on dependency is really a pretty complex beast to implement and we are already taking on a lot in 3.0.

Ordering of tests is a bandaid if what you want is true dependency, but it's pretty easy to implement.

By doing ordering (#170) in 3.0 we do run a risk: some users will treat it as the answer to their dependency problems and come to depend on it in the wrong context. Still, it seems better than doing nothing.

I'd like to find the time to write an article on the various kinds of dependency and ordering and how they differ in usage and implementation... maybe... ;-)

@CharliePoole
Contributor

Correction: after I wrote the preceding comment, I noticed that #170 is also scheduled for 'future'.

We'll continue to discuss whether it's possible to include some ordering feature in 3.0 as opposed to 3.2, but at the moment they are not planned.

@circa1741

(Samples taken from http://blog.bits-in-motion.com/search?q=mbunit)

When writing integration tests, it is sometimes useful to chain several tests together that capture a logical sequence of operations.

MbUnit can capture this chaining either with dependencies:
[image: DependsOn sample from the linked blog post]
Also allows [DependsOn(typeof(ClassName.MethodName))]

Or with explicit ordering:
[image: Order sample from the linked blog post]
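To make the chaining idea concrete, a small made-up sequence using the [DependsOn] form quoted earlier in this thread:

using MbUnit.Framework;

[TestFixture]
public class AccountLifecycleTests
{
    [Test]
    public void CreateAccount() { /* ... */ }

    [Test, DependsOn("CreateAccount")]    // only meaningful once CreateAccount has run
    public void Login() { /* ... */ }

    [Test, DependsOn("Login")]
    public void DeleteAccount() { /* ... */ }
}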

@CharliePoole
Contributor

Thanks for the example code. It gives something to aim for. Your first example is relevant to this issue. The second is exactly what we are planning for #170.

@circa1741

I have an idea that is more of a twist for dependency and ordering.

The discussion, so far, regarding dependency is "go ahead and run Test H (and possibly 8 other tests) only if Test 8 passes." In other words, there is no point in running Test H, because if Test 8 fails then I know that Test H will also fail.

How about a dependency when a test fails?

Scenario:
I need a Smoke Test that covers a lot of ground. So, I am planning a test fixture that is an end-to-end test with basic coverage of many of the SUT's features. The tests in said test fixture will use test ordering and are "not independent." The test order will be Test A, then Test B, then Test C, etc.

Now, because the tests are "not independent" I know that if Test C fails then all the following tests will also fail. Therefore, I need more tests to run in order to get a bigger picture of the Smoke Test.

I need to be able to configure to run Test 1 if Test A fails, Test 2 if Test B fails, Test 3 if Test C fails, etc.

My Test 3 is designed to be independent of Test B, so if it fails then I have a better understanding of why Test C failed earlier. As it turns out, my Tests 4 (for Test D), 5 (for E), 6 (for F), etc. all pass. Then I understand that only the feature that was covered by Test C is the issue.

Why not run Tests 1, 2, 3, etc. instead? Because those are isolated and independent tests, so I would not be doing integration tests. Again, I need a Smoke Test that covers a lot of ground.

Maybe something like:

  • [DependsOnPassing("Test so and so")]
  • [DependsOnFailing("Test blah blah blah")]

This will allow finer control in my automation test design.

@circa1741

How about something like these attributes instead:

  • [DependsOnPassing("Test so and so")]
  • [RunOnFail("Test blah blah blah")]

Please note which test each of these is attached to. These attributes should be usable at any of several levels: assembly, test fixture, test.

[DependsOnPassing("Test E")]
Test F

  • Test E will automatically be executed if its result is unknown.
  • Only then will Test F be determined if it should run or not.

[RunOnFail("Test N")]
Test I

  • If Test I fails then Test N will automatically be executed.

Test N

  • Test N may be executed on its own.
  • But it will also automatically be executed if Test I fails.

Test E

  • Test E may be executed on its own.
  • But it will also automatically be executed if its result is unknown because Test F depends on this test passing first.
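Expressed as code, the proposal above might look like this (DependsOnPassing and RunOnFail are hypothetical attributes that exist only in this proposal, not in NUnit or MbUnit):

[Test]
public void TestE() { /* may run on its own; also run automatically when TestF needs its result */ }

[Test, DependsOnPassing("TestE")]
public void TestF() { /* only runs if TestE has passed */ }

[Test, RunOnFail("TestN")]
public void TestI() { /* if this fails, TestN is executed automatically */ }

[Test]
public void TestN() { /* independent diagnostic; may also run on its own */ }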

@JeffCave

Copied from #1031, which obviously duplicates this. Hopefully the example helps and the keywords direct others here...

I have a couple of cases where I would like to mark a test as a prerequisite of another test. It would be nice to have an attribute that indicated tests that were prerequisites of the current test.

In the case where routines depend on one another, it is possible to know that a given test is going to fail because a sub-routine failed its test. If the test is a long running one, there really isn't any point in running the test if the sub-routine is broken anyway.

Contrived Example:

using System;
using System.Collections.Generic;
using NUnit.Framework;

public static class Statistics
{
    public static double Average(IEnumerable<double> values)
    {
        double sum = 0;
        double count = 0;
        foreach (var v in values)
        {
            sum += v;
            count++;
        }
        return sum / count;
    }

    public static double MeanVariance(IEnumerable<double> values)
    {
        var avg = Average(values);
        var variance = new List<double>();
        foreach (var v in values)
        {
            variance.Add(Math.Abs(avg - v));
        }
        avg = Average(variance);
        return avg;
    }
}

[TestFixture]
public class TestStatistics
{
    [Test]
    public void Average()
    {
        var list = new List<double> { 1, 2, 3, 4, 5, 6, 7, 8, 9, 0 };
        var avg = Statistics.Average(list);
        Assert.AreEqual(4.5, avg);
    }

    [Test]
    //[Prerequisite("TestStatistics.Average")]
    public void MeanVariance()
    {
        //try { this.Average(); } catch { Assert.Ignore("Pre-requisite test 'Average' failed."); }
        var list = new List<double> { 1, 2, 3, 4, 5, 6, 7, 8, 9, 0 };
        var variance = Statistics.MeanVariance(list);
        Assert.AreEqual(0, variance);
    }
}

Given the example, if the test Average fails, it makes sense to not bother testing MeanVariance.

I would conceive of this working by chaining the tests:

  • if MeanVariance is run, Average is forced to run first.
  • If Average has already been run, the results can be reused.
  • If Average fails, MeanVariance is skipped.

@rladeira

Is there any estimate of when this feature will be available? I haven't found this information in other threads - maybe this is a duplicate question.

@CharliePoole
Contributor

It's in the 'Future' milestone, which means after 3.2 - the latest actual milestone we have. However, we are about to reallocate issues to milestones, so watch for changes.

@CharliePoole CharliePoole modified the milestones: Future, Backlog Dec 4, 2015
@oznetmaster
Contributor

I have no idea if I can edit a wiki. Never tried, and have no idea how to. Can you give me a starting "push"? :)

@rprouse
Member

rprouse commented Feb 24, 2016

@oznetmaster, editing the wiki is pretty easy,

  1. Pick a page you want to link to your new wiki page from, probably the Specifications page
  2. Edit the page by clicking the button
  3. Add a link by surrounding text with double square brackets, like [[My NUnit Spec]]
  4. Save the page
  5. View the new link; it will be red, indicating that the page does not exist yet
  6. Click the red link; it will take you to a page-creation form
  7. Edit the page as you would an issue, using GitHub markdown, and save

@CharliePoole
Contributor

@oznetmaster What you describe is pretty much how the parallel dispatcher already works. Items are assigned to individual queues depending on their characteristics. Currently, it takes into account parallelizability and Apartment requirements. All items, once queued, are ready to be executed.

It was always planned that dependency queues would be added as a further step. I plan to use #1096 as an "excuse" to implement that infrastructure. Once it's implemented, it can then be further exposed to users as called for in #51. I'll be preparing a spec for the underlying scheduling mechanism (dispatcher) as well and I'd like your comments on it.

@CharliePoole
Contributor

@oznetmaster I created an empty page for you: https://github.com/nunit/dev/wiki/Test-Dependency-Attribute

@oznetmaster
Contributor

Are we committed to calling the attribute DependsOnAttribute? How about something more general like "DependenciesAttribute"?

It is niggling, I know :(

@CharliePoole
Contributor

You should write it up as you think it should be. Then we'll all fight about it. :-)

@oznetmaster
Contributor

So I have :)

@CharliePoole
Contributor

Suggestion: add a section that explains motivation for each type of dependency. For example, when would a user typically want to use AfterAny, etc.

As a developer, it's always tempting to add things for "completeness." Trying to imagine a user needing each feature is a useful restraint on this tendency. Unfortunately, users generally only tell us when something is missing, not when something is not useful to them.

@Sebazzz

Sebazzz commented Jun 8, 2016

For what it's worth, my input:

I'd rather not define a dependency per test method - that becomes rather tedious (and hard to maintain) if you have more than a few tests. Instead I want to establish an order between test fixtures. This comes from the following case we currently have: we are using MSTest for ordered tests. Apart from it being MSTest, it works great, because with the test ordering I can express two things about a test: certain tests may not execute before another test, and other tests have a dependency on another test and may only be executed after the other test(s) have been executed.

Let's say that the integration test:

  • Uses a test database with several user accounts in it
  • The first few tests exercise the test data, and also create test data themselves to be used in later tests. Note we have a dependency relationship here: some tests may not execute if earlier tests fail.
  • Then some browser-automation tests happen. They should be executed as late as possible, because they take a lot of time and we want feedback from the earlier (faster) tests first.
  • Finally, some logic is tested which deletes an entire user account. Note we have a "must not execute before" relationship here: if this test were run before the other tests, the other tests would fail.

With MSTest I can express this case fine: Each 'ordered test' in MSTest can contain ordered tests themselves. Also, ordered tests can have a flag set to abort if one of the tests fail.

             MyPrimaryOrderedTest
              /      |         \
   DomainTests  BrowserTests  DestructiveTests
    /   |  \       /  |   \      |   \ 
   A    B   C     D   E    F     G    H 

For example, MyPrimaryOrderedTest has the 'abort on failure' flag set to false: there is nothing preventing BrowserTests from executing if DomainTests fail. However, DomainTests itself has the flag set to true, so test C is not executed if A or B fail. Note that A through H can each be either an ordered test definition itself or a test fixture.

To be concrete, I was thinking of an interface like this to express the test fixture ordering:

interface ITestCollection {
    IEnumerable<Type> GetFixtureTypes();
    bool ContinueOnFailure { get; }
}

This is much more maintainable (and obvious) than having dependency attributes on every fixture, and it scales much better as the number of fixtures increases.

Note that for test ordering within fixtures, I would simply use the existing OrderAttribute. I think test methods should not have inter-fixture test dependencies, because that makes the test structure too complex and unmaintainable.

For test ordering between fixtures I have set up a prototype, and I have found that expressing dependencies between fixtures by using attributes becomes messy, even with only a few tests. Please also note that the prototype wouldn't allow ordering beyond the namespace the fixture is defined in, because each fixture is part of a parent test with the name of the namespace. I would need to implement my own ITestAssemblyBuilder to work around that, but NUnit is hardcoded to use the current DefaultTestAssemblyBuilder.
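A hypothetical implementation of that interface, matching the DomainTests node in the tree above (FixtureA/B/C are stand-ins for real [TestFixture] classes):

using System;
using System.Collections.Generic;

class FixtureA { }   // stand-ins for real [TestFixture] classes
class FixtureB { }
class FixtureC { }

class DomainTests : ITestCollection
{
    public IEnumerable<Type> GetFixtureTypes()
    {
        // Order matters: C must not run before A and B.
        yield return typeof(FixtureA);
        yield return typeof(FixtureB);
        yield return typeof(FixtureC);
    }

    // false = stop running the remaining fixtures in this collection when one fails
    // (the MSTest 'abort on failure' flag set to true).
    public bool ContinueOnFailure => false;
}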

@CharliePoole CharliePoole removed this from the Backlog milestone Jul 25, 2016
@Sebazzz

Sebazzz commented Sep 3, 2016

Update from my side: in the meantime I've managed to implement test ordering without the need to fork NUnit. It is "good enough" for me, so I use it now. It is already a lot better than the fragile state of many MSTest ordered tests.

@ravensorb

Out of curiosity -- any chance the Dependency feature is planned for the next release?

@CharliePoole
Contributor

No plans at the moment. FYI, you can see that here on GitHub by virtue of the fact that it's not assigned to anyone and has no milestone specified.

For normal priority items, like this one, we don't usually pre-plan it for a particular release. We reserve that for high and critical items. This one will only get into a release plan when somebody decides they want to do it, self-assigns it and brings it to a point where it's ready for merging.

In fact, although not actually dependent on it, this issue does need a bunch of stuff from #164 to be effectively worked on. I'm working on that and expect to push it into the next release.

@Flynn1179

Flynn1179 commented May 22, 2017

Relevant: https://stackoverflow.com/questions/44112739/nunit-using-a-separate-test-as-a-setup-for-a-subsequent-test

Got referred here from there. I'm a firm believer that any fault should only ever cause one unit test to fail; having a whole bunch of others fail because they're dependent on that fault not being there is... undesirable at best, and it's more than a little time-consuming to track down which of the failing tests is the relevant one.

Edit: Just looking at my existing code, I've got a 'Prerequisite(Action action)' method in many of my test fixtures that wraps the call to action in a try/catch that turns an AssertionException into an Inconclusive result. It also does some cleanup, like calling 'substitute.ClearReceivedCalls()' (from NSubstitute) and emptying a list populated by 'testObj.PropertyChanged += (sender,e) => receivedEvents.Add(e.PropertyName)'; otherwise past actions could contaminate later calls to 'substitute.Received...'.

It might be necessary to also include some sort of 'Cleanup' method in the dependency attribute to support things like this.
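For reference, a bare-bones sketch of that kind of helper (names and the cleanup hook are illustrative; this is not an NUnit feature):

using System;
using NUnit.Framework;

public static class TestChaining
{
    public static void Prerequisite(Action action)
    {
        try
        {
            action();                       // run the prerequisite test body
        }
        catch (AssertionException)
        {
            // The prerequisite failed, so this test can't say anything useful.
            Assert.Inconclusive("A prerequisite test failed.");
        }
        // A fuller version would also do per-test cleanup here,
        // e.g. clearing received calls on substitutes.
    }
}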

@espenalb

@Flynn1179 - I agree with you when it comes to unit tests. However, NUnit is also a great tool for other kinds of tests. For example, we use it for testing embedded firmware and are really missing this feature...

@CharliePoole
Contributor

@Flynn1179 Completely agree with you. There are techniques to prevent spurious failures such as you describe that don't "depend" on having a test dependency feature. In general, use an assumption to test those things that are actually tested in a different test and are required for your test to make sense.
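For example, reusing the Statistics sample from earlier in this thread, that assumption technique looks roughly like this (test names are invented):

using NUnit.Framework;

[TestFixture]
public class MeanVarianceTests
{
    [Test]
    public void MeanVariance_OfTwoValues()
    {
        // Assume what the Average test verifies; if the assumption fails,
        // this test is reported as Inconclusive rather than as another failure.
        Assume.That(Statistics.Average(new[] { 2.0, 4.0 }), Is.EqualTo(3.0));

        Assert.That(Statistics.MeanVariance(new[] { 2.0, 4.0 }), Is.EqualTo(1.0));
    }
}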

It was a goal of NUnit 3 to extend NUnit for effective use with non-unit tests. We really have not done that yet - it may await another major release. OTOH, users are continuing to use it for those other purposes and trying to find clever ways to deal with the limitations. Here and there we have added small features and enhancements to help them, but it's really still primarily a unit-test framework.

Personally, I doubt I would want to use dependency as part of high-level testing myself. Rather, I'd prefer to have a different kind of fixture that executed a series of steps in a certain order, reporting success or failure of each step and either continuing or stopping based on some property of the step. That, however, is a whole other issue.

@espenalb I'd be interested to know what you feel is needed, particularly for testing embedded firmware.

@espenalb

We are actually very happy about what NUnit offers.

We use a combination of FixtureSetup/Setup/test attributes for configuring the device (Including flashing firmware)

Then we use different interfaces (serial port, JTAG, Ethernet) to interact with the devices. Typically we send some commands and then observe the results, which can be a command response or, in advanced tests, measurements from dedicated hardware equipment observing device behavior.

The NUnit assertions and FluentAssertions are then used to verify that everything is OK. By definition, these are all integration tests - and by nature a lot slower than regular unit tests. A test dependency feature is therefore sorely missed: there is no point in verifying, for example, sensor performance if the command to enable the sensor was rejected. The ability to pick up one test where another completed is therefore very valuable.

With the test dependency attribute, we would have one failing test, then n ignored/skipped tests, where the skipped tests could clearly state that they were not executed because the other test failed...

Another difference from regular unit testing is heavy use of the log writer. There is one issue there regarding multithreading and logging, which I will create a separate issue for if it does not already exist.

Bottom line from us: we are very happy with NUnit as a test harness for integration testing. It gives us excellent support for a lot of advanced scenarios by using C# to interact with the Device Under Test and other lab equipment.

With ReportUnit we then get nice HTML reports, and we also get Jenkins integration by using the rev2 nunit test output.

@ChrisMaddock
Member

ChrisMaddock commented May 22, 2017

we also get Jenkins integration by using the rev2 nunit test output.

@espenalb - Complete aside, but the Jenkins plugin has recently been updated to read NUnit 3 output. 🙂

@DannyBraig

Can someone give a small update on the status of this feature? Is it planned?
In my department we are doing very long-running tests, which logically really depend on each other.
Some kind of "test dependency" would be really interesting and helpful for us...

I heard that you are, in general, planning to "open up" NUnit for "non-unit test" tests as well (which is basically our case...). I think this attribute would be one step towards it :-)

@Sebazzz

Sebazzz commented Jul 29, 2017

This feature is still in the design phase, so other than using external libraries there is no built-in support currently.

@aolszowka

I am sorry to pull up an old thread, but during the course of working through NUnit with a friend we stumbled onto a case where, if we had such a feature, we could start to create integration tests (I realize NUnit is a unit testing framework, but it seems like we could get what we want if we had Test Dependency).

First here's an updated link to the proposed spec (the link from CharliePoole here #51 (comment) was dead): https://github.com/nunit/docs/wiki/Test-Dependency-Attribute-Spec

Now for a use case: consider the following toy program and associated tests.

namespace ExampleProgram
{
    using System.Collections;
    using NUnit.Framework;

    public static class ExampleClass
    {
        public static int Add(int a, int b)
        {
            return a - b;
        }

        public static int Increment(int a)
        {
            return Add(a, 1);
        }
    }

    public class ExampleClassTests
    {
        [TestCaseSource(typeof(AddTestCases))]
        public void Add_Tests(int a, int b, int expected)
        {
            int actual = ExampleClass.Add(a, b);
            Assert.That(actual, Is.EqualTo(expected));
        }

        [TestCaseSource(typeof(IncrementTestCases))]
        public void Increment_Tests(int a, int expected)
        {
            int actual = ExampleClass.Increment(a);
            Assert.That(actual, Is.EqualTo(expected));
        }
    }

    internal class IncrementTestCases : IEnumerable
    {
        public IEnumerator GetEnumerator()
        {
            yield return new TestCaseData(0, 1);
            yield return new TestCaseData(-1, 0);
            yield return new TestCaseData(1, 2);
        }
    }

    internal class AddTestCases : IEnumerable
    {
        public IEnumerator GetEnumerator()
        {
            yield return new TestCaseData(0, 0, 0);
            yield return new TestCaseData(0, 2, 2);
            yield return new TestCaseData(2, 0, 2);
            yield return new TestCaseData(1, 1, 2);
        }
    }
}

As an implementer I know that if any unit tests around Add(int, int) fail, there is absolutely no point in running all the additional tests around Increment(int) - they would only add noise. However, there does not appear to be a way (short of Test Dependency) to specify this to NUnit (at least in my searches).

Doing a lot of research online, it seems like others have worked around this with a combination of approaches (none of which make it explicitly clear that Increment(int) depends on Add(int, int)). Such approaches include:

  • Using Categories
  • Using a Naming Convention To Control the Ordering of Tests

Neither of these seems to scale well - or even works, for that matter, when you use other features such as Parallel - and both require some external "post processing" after the NUnit run has completed.
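(For comparison, the ordering-style answer that NUnit later shipped for #170 - the [Order] attribute - applied to the toy example would look roughly like the sketch below; note it only sequences the tests, it does not skip the Increment test when the Add test fails.)

public class ExampleClassTests
{
    [Test, Order(1)]
    public void Add_Works()
    {
        Assert.That(ExampleClass.Add(1, 1), Is.EqualTo(2));
    }

    [Test, Order(2)]   // runs after Order(1), but still runs - and fails - even if Add is broken
    public void Increment_Works()
    {
        Assert.That(ExampleClass.Increment(1), Is.EqualTo(2));
    }
}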

Is this the best path forward (if we were to use pure NUnit)? Is this feature still being worked on? (In other words, if a PR were submitted, would it jam up anyone else working on something related?)

There is lots of good discussion in this thread about cyclic dependencies and other potential issues with this feature; it is obviously not easy to implement, otherwise someone would have done it already. I am sure adding Parallel and TestCaseSource into the mix also increases the complexity. I intend to dig more at some point, but before doing so I wanted to make sure that this was not a solved problem or already in the works.

@CharliePoole
Contributor

@aolszowka
This remains an accepted feature, at least as far as the issue labels go. @nunit/framework-team Am I correct there?

Nobody has assigned it to themselves, which is supposed to mean that nobody is working on it. Smart of you to ask, none the less! If you want to work on it, some team member will probably assign it to themselves and "keep an eye" on you, since GitHub won't let us assign issues to non-members.

I made this a feature and gave it its "normal" priority back when I was project lead. I intended to work on it "some day" but never did and never will now that I'm not active in the project. I'm glad to correspond with you over any issues you find if you take it on.

My advice is to NOT do what I tried to do: write a complete spec and then work toward it. As you can read in the comments, we kept finding things to disagree about in the spec and nobody ever moved it to implementation. AFAIK (or as far as I remember), the prerequisite work in how tests are dispatched has already been done. I would pick one of the three types of dependency (see my comment from two-plus years ago) and just one use case, and work on that. We won't want to release something until we are sure the API is correct, so you should probably count on a long-running feature branch that has to be periodically rebased or merged from master. Big job!

@ChrisMaddock
Member

This remains an accepted feature, at least as far as the issue labels go. @nunit/framework-team Am I correct there?

Yes - as far as I'm concerned!

@Shiney
Contributor

Shiney commented Oct 16, 2023

Is anyone still interested in this being implemented? I might be interested in giving it a go.

@OsirisTerje
Member

It is still open, so feel free to give it a shot :-)

@OsirisTerje
Member

@Shiney Any possibility of doing this?

@Shiney
Contributor

Shiney commented Jan 11, 2024

Not in the short term. I'm planning on taking some parental leave at some point, so if I end up with some free time to keep my C# up to date and this isn't done yet, I will probably do this - but I have no free time at the moment.
