
Data-generated test cases (parameterized tests) #1454

Closed
bajtos opened this issue Nov 28, 2014 · 31 comments

Comments

@bajtos

bajtos commented Nov 28, 2014

At the moment, generating multiple test cases that differ only by input data is rather cumbersome:

describe('prime number checker', function() {
  [2,3,5,7].forEach(function(value) {
    it('returns true for prime number ' + value, function() {
      expect(isPrime(value)).to.be.true();
    });
  });

  [4,6,8,9].forEach(function(value) {
    it('returns false for composite number ' + value, function() {
      expect(isPrime(value)).to.be.false();
    });
  });
});

I am proposing to extend the current API with syntactic sugar for data-generated test cases by adding an optional second parameter to both it and describe; this parameter would be an array of data points to test.

describe('prime number checker', function() {
  it('returns true for prime numbers', [2,3,5,7], function(data) {
    expect(isPrime(data)).to.be.true();
  });

  it('returns false for composite numbers', [4,6,8,9], function(data) {
    expect(isPrime(data)).to.be.false();
  });
});

More advanced example:

var SAMPLES = [
  { scheme: 'http', host: 'localhost', path: '/', url: 'http://localhost/' },
  { scheme: 'https', host: '127.0.0.1', path: '/test', url: 'https://127.0.0.1/test' }
];

describe('url helper', SAMPLES, function(data) {
  it('builds url from components', function() {
    var str = urlHelper.build({ scheme: data.scheme, host: data.host, path: data.path });
    expect(str).to.equal(data.url);
  });

  it('parses url into components', function() {
    var components = urlHelper.parse(data.url);
    expect(components).to.eql({ scheme: data.scheme, host: data.host, path: data.path });
  });
});

I am happy to contribute the implementation.

The purpose of this issue is to discuss necessary details and to get maintainers' approval before I start writing code.

Related to #57.

/cc @mcollina

@travisjeffery
Contributor

describe('prime number checker', function() {
  [2,3,5,7].forEach(function(value) {
    it('returns true for prime number ' + value, function() {
      expect(isPrime(value)).to.be.true();
    });
  });
});

this should be done like this, imo:

describe('prime number checker', function() {
    it('should return true for prime numbers', function() {
      [2,3,5,7].forEach(function(value){
        expect(isPrime(value)).to.be.true();  
      });
    });
});

and it seems that chai could add, might even already have, a matcher that would take an array and apply a fn against it - in this case isPrime.

thanks for the issue but i'm gonna close. i don't think an api for this should be in mocha; it should be in chai et al. if anything.
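
For illustration, a minimal sketch of the kind of chai helper hinted at above. This is a hypothetical plugin, not an existing chai matcher; the allSatisfy name is made up.

var chai = require('chai');
var expect = chai.expect;

// Hypothetical plugin: adds an `allSatisfy` assertion that applies a
// predicate to every element of an array and reports the failing value.
chai.use(function (chaiInstance) {
  chaiInstance.Assertion.addMethod('allSatisfy', function (predicate) {
    var arr = this._obj;
    arr.forEach(function (value) {
      this.assert(
        predicate(value) === true,
        'expected ' + value + ' to satisfy ' + (predicate.name || 'predicate'),
        'expected ' + value + ' not to satisfy ' + (predicate.name || 'predicate')
      );
    }, this);
  });
});

// Usage: expect([2, 3, 5, 7]).to.allSatisfy(isPrime);

Note that this addresses the unhelpful error message, but not the second point raised below: the first failing value still aborts the whole test.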

@bajtos
Author

bajtos commented Nov 28, 2014

With all my respect, the proposed solution sucks.

First of all, the error message will be "expected false to be true", which is absolutely unhelpful. It can be worked around via expect(isPrime(value), value).to.be.true(), but that's even more boilerplate code.

Secondly, the first failure will stop the test case, so you won't know whether only one data point fails the test or whether there are more of them. I.e. if 3 fails, 5 and 7 are not run at all.

Is there a way to implement this feature as a mocha plugin?

@bajtos changed the title from "Data-generated test cases" to "Data-generated test cases (parameterized tests)" on Nov 28, 2014
@bajtos
Author

bajtos commented Nov 28, 2014

Also note that parameterized tests are a common feature of unit-testing frameworks.

  • .NET's NUnit has supported parameterized tests since v2.5, released in 2009
  • So does Java's JUnit (docs)
  • Ruby's rspec has plugins adding support for parameterized tests (param_test, rspec-parameterized)

@bajtos
Author

bajtos commented Nov 28, 2014

and it seems that chai could add, might even already have, a matcher that would take an array and apply a fn against it - in this case isPrime.

This may help in simple examples like I described above, but it becomes cumbersome as the test method grows in length.

Example:

it('should return custom 404 response', 
  ['/Products/unknown-id', '/unknown-root', '/images/unknown.jpg'],
  function(url, done) {
    supertest(app).get(url)
      .expect(404)
      .expect('Content-Type', /html/)
      .end(function(err, res) {
        if (err) return done(err);
        expect(res.body).to.contain('<title>Lost your way?</title>');
        done();
      });
  }
);

@bajtos
Author

bajtos commented Nov 28, 2014

First of all, the error message will be "expected false to be true", which is absolutely unhelpful. It can be worked around via expect(isPrime(value), value).to.be.true(), but that's even more boilerplate code.

Let me rephrase that.

The error message "expected false to be true" is very unhelpful, as it lacks context: what value was being tested? This can be worked around via expect(isPrime(value), value).to.be.true(), but that's even more boilerplate code to write. And let's face it, most developers won't bother with that. My solution provides a Pit of Success, where the natural usage of Mocha's API produces usable error messages.

However, it's the second point (the first failure aborts the test case) that is most important. And again, most developers are not aware of how important it is to get this right and will happily do things like

it('returns true for prime numbers', function() {
  expect(isPrime(2)).to.be.true();
  expect(isPrime(3)).to.be.true();
});

While my proposed solution will not prevent them from doing that, it will at least make it super easy to fix their test code to do the right thing:

it('returns true for prime numbers', [2, 3], function(num) {
  expect(isPrime(num)).to.be.true();
});

@travisjeffery could you please elaborate more on why you are rejecting my idea and perhaps offer advice on how I can get this feature without forking mocha, e.g. by modifying mocha to allow plugins to customize it and describe?

@dasilvacontin
Contributor

Good points there @bajtos.

I'm not sure I dig adding optional parameters to the it and describe functions though. Once we do that, we pretty much kill any suggestions of adding any other optional parameters, like the tags that are being suggested in a different thread. (not like I'm into adding those either..)

Some other options:

var SAMPLES = [
  { scheme: 'http', host: 'localhost', path: '/', url: 'http://localhost/' },
  { scheme: 'https', host: '127.0.0.1', path: '/test', url: 'https://127.0.0.1/test' }
];

describe('url helper', function() {
  it('builds url from components', {using:SAMPLES, tags: [tag1, tag2, tag3]}, function(data) {
    var str = urlHelper.build({ scheme: data.scheme, host: data.host, path: data.path });
    expect(str).to.equal(data.url);
  });
  using(SAMPLES).it('builds url from components', function(data) {
    var str = urlHelper.build({ scheme: data.scheme, host: data.host, path: data.path });
    expect(str).to.equal(data.url);
  });
});
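
A rough sketch of how such a using() helper could be layered on top of the existing it, without core changes; the helper name and title format are illustrative only.

// Hypothetical helper (not part of mocha): wraps the global `it` so each
// data point becomes its own test case with the value appended to the title.
function using(samples) {
  return {
    it: function (title, fn) {
      samples.forEach(function (data) {
        var caseTitle = title + ' [' + JSON.stringify(data) + ']';
        if (fn.length > 1) {
          // async style: fn(data, done)
          it(caseTitle, function (done) {
            fn.call(this, data, done);
          });
        } else {
          // sync style: fn(data)
          it(caseTitle, function () {
            fn.call(this, data);
          });
        }
      });
    }
  };
}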

@boneskull
Member

@dasilvacontin Yeah, I was thinking about the tags more. I don't think adding more parameters is the answer to that one, but in a nutshell a syntax like:

this.tags = ['foo', 'bar', 'baz'];

Is much more palatable.

Anyway, regarding this:

  it('returns true for prime numbers', [2,3,5,7], function(data) {
    expect(isPrime(data)).to.be.true();
  });

What's the proposal for handling async tests in this manner? I'd imagine "more parameters".

describe('prime number checker', function() {
  [2,3,5,7].forEach(function(value) {
    it('returns true for prime number ' + value, function() {
      expect(isPrime(value)).to.be.true();
    });
  });

  [4,6,8,9].forEach(function(value) {
    it('returns false for composite number ' + value, function() {
      expect(isPrime(value)).to.be.false();
    });
  });
});

The above seems pretty similar to param-test, though I don't read Ruby.

@bajtos I'm not sure if it's possible to write a "plugin" that does this, but the correct way would be an interface that likely "extends" the BDD interface. If you can supply a 3p interface on the command-line (I can't recall if you can), then you can use whatever you come up with. If you can't supply a 3p interface, then you should be able to, which would make a great PR.
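
To give an idea of what such an interface could look like, here is a hedged sketch loosely based on lib/interfaces/bdd.js. The param-bdd name and the paramIt global are made up, and how the file gets loaded from the command line would need checking, per the caveat above.

var Mocha = require('mocha');

// Hypothetical third-party interface: reuses the stock BDD interface and adds
// a paramIt(title, data, fn) global that expands into one test per data point.
Mocha.interfaces['param-bdd'] = function (suite) {
  Mocha.interfaces.bdd(suite);

  suite.on('pre-require', function (context) {
    context.paramIt = function (title, data, fn) {
      data.forEach(function (value) {
        var wrapped = fn.length > 1
          ? function (done) { fn.call(this, value, done); } // async style
          : function () { fn.call(this, value); };          // sync style
        context.it(title + ' <' + value + '>', wrapped);
      });
    };
  });
};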

@travisjeffery
Contributor

This may help in simple examples like I described above, but it becomes cumbersome as the test method grows in length.

it('should return custom 404 response', 
  ['/Products/unknown-id', '/unknown-root', '/images/unknown.jpg'],
  function(url, done) {
    supertest(app).get(url)
      .expect(404)
      .expect('Content-Type', /html/)
      .end(function(err, res) {
        if (err) return done(err);
        expect(res.body).to.contain('<title>Lost your way?</title>');
        done();
      });
  }
);

the issue here is that you have a huge fn defined in the midst of your test. just create and use a named fn - e.g. assertUnknown, which would also help with clarity and readability, and it's not a problem.

you could write your own interface that supports an array parameter.
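
As a sketch of that suggestion, the inline supertest test from above could be pulled out into a named helper; app, supertest and expect are assumed to be in scope as in the earlier example.

// Named helper extracted from the inline test body quoted above.
function assertUnknown(url, done) {
  supertest(app).get(url)
    .expect(404)
    .expect('Content-Type', /html/)
    .end(function(err, res) {
      if (err) return done(err);
      expect(res.body).to.contain('<title>Lost your way?</title>');
      done();
    });
}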

@boneskull
Member

In a nutshell: it's not essential for Mocha core.

@bajtos
Author

bajtos commented Nov 29, 2014

@dasilvacontin I like the using() API you have proposed:

using(SAMPLES).it('builds url from components', function(data) {

What's the proposal for handling async tests in this manner? I'd imagine "more parameters".

See the supertest example in my comments above: when the test case is parameterized, the async test function takes two parameters.

it('should return custom 404 response', 
  ['/Products/unknown-id', '/unknown-root', '/images/unknown.jpg'],
  function(url, done) {
    // etc.
  });

@travisjeffery

the issue here is that you have a huge fn defined in the midst of your test. just create and use a named fn - e.g. assertUnknown, which would also help with clarity and readability, and it's not a problem.

In my experience, it is quite common for test functions to have 5-10 lines of code. There are two ways to clean up the test code: either extract a shared function, or write a parameterized test. Both approaches are valid.

Your proposal involves too much unnecessary repeated boilerplate.

it('should return custom 404 response for /Products/unknown-id', function(done) {
  assertUnknown('/Products/unknown-id', done);
});

it('should return custom 404 response for /unknown-root', function(done) {
  assertUnknown('/unknown-root', done);
});

it('should return custom 404 response for /images/unknown.jpg', function(done) {
  assertUnknown('/images/unknown.jpg', done);
});

Compare it with what I am proposing:

it('should return custom 404 response', 
  ['/Products/unknown-id', '/unknown-root', '/images/unknown.jpg'],
  assertUnknown);

// or perhaps
it('should return custom 404 response', 
  ['/Products/unknown-id', '/unknown-root', '/images/unknown.jpg'],
  function(url, done) {
    assertUnknown(url, done);
  });

Anyways, your proposal to write a custom reporter is a reasonable workaround.


Honestly, I am getting tired of fighting what I see as bad decisions on mocha's side and I will most likely find another test framework that is closer to my mindset and contribute there. Here are some of the other issues I have with Mocha in case you were interested: #1218, #1401, #1065 .

@boneskull
Member

@bajtos

Anyways, your proposal to write a custom reporter is a reasonable workaround.

It's an interface, not a reporter. See bdd.js for a starting place.

Honestly, I am getting tired of fighting what I see as bad decisions on mocha's side

What decisions? Feel free to email me, or join us in the Mocha Slack room to discuss (email Travis if you would like to join Slack), if you don't wish to do so here. I can't promise any resolutions, but statements like this call for more information.

#1218, #1401, #1065

None of these tickets are closed--they are pending review and/or action; nothing has really been "decided" here.

@bajtos
Author

bajtos commented Nov 29, 2014

It's an interface, not a reporter. See bdd.js for a starting place.

That was a typo on my side, thanks for correcting me and adding a link to the source file.

What decisions?

Well, maybe "decision" was not the right word. My impression is that issues that are important to a certain subset of mocha users like me are not given the same importance by mocha maintainers.

None of these tickets are closed--they are pending review and/or action; nothing has really been "decided" here.

Ad #1065: I submitted a pull request #949 first. Since it was rejected, I created an issue to discuss the best way of addressing the problem. The issue has been open for a year without any comment from project maintainers. In the meantime, a partial solution was landed via #993.

Now I know very well (from my own experience) that you can't comment on all issues. However, I would appreciate it if you could at least comment on issues that are following up on rejected pull requests, as such issues are likely to turn into another contribution (pull request).

Ad #1218: This is super annoying for anybody who is using Jenkins JUnit integration, as any console.log and/or console.error screws up the XML output. There were at least three pull requests trying to fix this issue, dating as far back as Jun 13, 2013 (!!). From my point of view, mocha maintainers don't consider this issue important enough to get the fix landed. At StrongLoop, we ended up maintaining our own fork of Mocha just because of this issue.

Ad #1401 - I see that you are going to merge this one soon, @boneskull, thank you for that.

As I was thinking more about my feelings about mocha, I realised it may be the communication style I find most off-putting. When I try to scratch my mocha-related itch and contribute a patch, I usually end up with a nonconstructive rejection that does not give me any alternative solution for fixing my problem. This creates an impression that mocha maintainers don't understand mocha users and the real-world issues they are facing.

@mcollina

I share @bajtos's feelings about mocha. For a long time, mocha served me really well, but at some point I started disagreeing with the project direction. It is barely usable for me now, mostly because of #1401. I usually write tools/modules, but it seems that mocha is mostly focusing on applications now.

I think I have more than 10k tests to maintain written in mocha, and I'm starting to think I should care more about this project, but it does not feel like a welcoming community.

I wrote my own thing at least 10 times for this particular issue, but here is the least ugly one: https://github.com/nearform/nscale-planner/blob/master/test/integration/integration.js. IMHO it's superior to all the other proposals, but that's my taste. I also think it can be supported here:

var custom = it.gen("should be generic with", function(a, b) {
  //....
})

custom(1) // the test is reported as "should be generic with 1".
custom.skip(2)
custom.only("a string", 3) // the test is reported as "should be generic with a string".

Also, it can be applied to describe, to simplify the handling of abstract test groups. I can work on a PR if you want to.
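
A hedged userland sketch of how an it.gen-style factory could behave; this is not mcollina's actual nscale-planner helper, just an approximation of the semantics shown above.

// Hypothetical generator: returns a callable test factory with skip/only variants.
function gen(baseTitle, body) {
  function make(testFn) {
    return function (/* label?, ...params */) {
      var args = Array.prototype.slice.call(arguments);
      // If the first argument is a string, use it as the label; otherwise
      // derive the label from the parameters themselves.
      var label = (typeof args[0] === 'string') ? args.shift() : args.join(', ');
      testFn(baseTitle + ' ' + label, function () {
        return body.apply(this, args);
      });
    };
  }
  var custom = make(it);
  custom.skip = make(it.skip);
  custom.only = make(it.only);
  return custom;
}

// var custom = gen('should be generic with', function (a, b) { /* ... */ });
// custom(1);                  // reported as "should be generic with 1"
// custom.only('a string', 3); // reported as "should be generic with a string"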

@boneskull
Member

@mcollina @bajtos Can you share some examples of where you've felt unwelcome?

I usually write tools/modules, but it seems that mocha is mostly focusing on applications now.

@mcollina What makes you say that?

The issue has been open for a year without any comment from project maintainers. In the meantime, a partial solution was landed via #993.

@bajtos If a critical issue (to you) has lingered for a long time, it's probably because we don't understand how many people it affects, and/or don't fully understand its severity. If something gets stale, it (often) gets closed. If something keeps getting bumped, it typically gets more attention, but that's not always the case (explanation follows).

I'm going to ramble for a bit.


There are two (2) active maintainers of this project: @travisjeffery and I. After TJ left, there was an influx of new collaborators, but many have dropped out. If either of us is terse when responding to issues, it's because that's about all we can muster.

I have maybe three (3) hours each week to devote to this project. Most of that time is taken responding to issues. At present time, my TO-DO list for Mocha is 158 items long, and I don't have a great idea about where to start if I do get a chance to implement something.

Because we are so thin, we can't really afford to merge many features. Eventually those features will create more bugs. And we don't have time for more bugs. So, the priority right now for us is to ensure critical issues are addressed. But unless there's a flurry of activity on an issue, we won't necessarily know how critical it is.

So, please excuse our reluctance to merge feature PRs. If there's any viable workaround, that's pretty much what's necessary right now.

There are two main problems here, as I see it:

  1. We do not have enough resources to add much to Mocha, and
  2. We probably have not done a great job of communicating that.

To address 1., we need more help. Presently we need help managing issues, reviewing PRs, and bringing critical issues to attention. This person or person(s) would have to actually enjoy "project management", or else they won't last long. If they can code, that's great too. This potential collaborator would need to understand that resources are very limited.

I'm thinking maybe a splash on the README and post to the Google Group announcing we're looking for help in this area. Any other ideas?

Regarding 2.: I did add CONTRIBUTING.md recently to explain a bit about the project's status, but I think that was insufficient, or maybe not worded well.

Perhaps when closing PRs, we can be more courteous and reiterate the current needs of the project. Instead of a public declaration, individual attention would probably alleviate some pain.


Users get upset when you don't merge their PRs. I think what contributes to this is that users do not have a clear, documented way to develop a plugin, so they feel like the only way their feature will ever see the light of day is if it gets merged.

If a well-documented plugin API was published, then users could write all the weird features they want for Mocha, and keep them out of the core. Nobody gets hurt, everybody's happy.

I think this is a great idea, but where to prioritize it next to the 158 other things?

Any further comments or suggestions would be appreciated.

@mcollina

I usually write tools/modules, but it seems that mocha is mostly focusing on applications now.

@mcollina What makes you say that?

@boneskull it's a feeling from my side of things. Every time I am writing a feature that is hard (like most of LevelGraph, nscale, Mosca, MQTT.js), I run into issues with Mocha, and it stops being simple, flexible and fun (as in the tagline). Every time I test an application with Mocha I feel "this is so great", and I recommend it in courses/etc.

Either it is missing a feature that I need and there is no simple and easy way I can publish that as a module, or it has some annoying bug (like #1401), or I have to resort to ugly and non-reusable hacks to make things work.

Can you share some examples of where you've felt unwelcome?

You wrote that yourself, in a more in-depth way than I ever could:

Users get upset when you don't merge their PRs. I think what contributes to this is that users do not have a clear, documented way to develop a plugin, so they feel like the only way their feature will ever see the light of day is if it gets merged.
If a well-documented plugin API was published, then users could write all the weird features they want for Mocha, and keep them out of the core. Nobody gets hurt, everybody's happy.

You state "we will not likely accept your PR" in the CONTRIBUTING.md file, but there is no easy way for people to develop the feature they need with a plugin. This makes people feel unwelcome.


Funding oss is hard, and I understand your problems, as these are often mine. However, so many companies and developers have an interest in mocha continuing to do well, and I believe finding new contributors is possible. I think saying "we need help" out loud is definitely important.

I will try to help as best I can.

Side note: you might consider applying for GSoC as an org, and hopefully let Google sponsor a young fellow developer to build and document that plugin API.

@bajtos
Author

bajtos commented Dec 1, 2014

@boneskull Thank you for sharing this. I did not realise how short on time you are. In hindsight, I should have assumed from the start that the lack of time was the reason for the terse comments, instead of suspecting some sort of intentional malice. I apologise for that.

Because we are so thin, we can't really afford to merge many features. Eventually those features will create more bugs. And we don't have time for more bugs. So, the priority right now for us is to ensure critical issues are addressed. But unless there's a flurry of activity on an issue, we won't necessarily know how critical it is.

I totally agree, now that I understand your situation.

Users get upset when you don't merge their PRs. I think what contributes to this is that users do not have a clear, documented way to develop a plugin, so they feel like the only way their feature will ever see the light of day is if it gets merged.

If a well-documented plugin API was published, then users could write all the weird features they want for Mocha, and keep them out of the core. Nobody gets hurt, everybody's happy.

I think this is a great idea, but where to prioritize it next to the 158 other things?

Perhaps you can start encouraging people to write their features as plugins using the existing APIs, even though they are not great, and let an official plugin API emerge from that work?

I sent a PR adding a note about plugins to CONTRIBUTING.md: #1459

Perhaps when closing PRs, we can be more courteous and reiterate the current needs of the project. Instead of a public declaration, individual attention would probably alleviate some pain.

+1000 for that.

IMO even a generic comment that you copy & paste every time you are rejecting an issue/a pull request would help a lot. Something along the lines of:

We appreciate your contribution. However, given how little time we have to maintain the project, we are accepting only critical bug fixes and absolutely essential features. We fully understand this is not great and you may be unhappy that your problem was not solved. Here are a few alternative ways to get what you need: 1) rewrite the feature as a mocha plugin, 2) if you are fixing a bug in a non-essential component, consider extracting the component into a standalone plugin and fixing the bug there, 3) help us with triaging and managing issues so that we have more time left to work on the code.

I'm thinking maybe a splash on the README and post to the Google Group announcing we're looking for help in this area. Any other ideas?

Add a banner to the website (http://mochajs.org/). Make sure the plea includes the information from your comment (Presently we need help managing issues, reviewing PRs, and bringing critical issues to attention. This person or person(s) would have to actually enjoy "project management", or else they won't last long. If they can code, that's great too.)

Frankly, I have other OSS projects where I have a hard time keeping up with issues and pull requests, so unfortunately I can't offer much more help, even though I would like to :(

@hellboy81

TL;DR: how can I use parametrized tests?

The only solution I found is using async.each

@danielstjules
Contributor

Something like this can work :)

var assert = require('assert');

describe('suite', function() {
  [1, 2, 3].forEach(function(n) {
    it('correctly handles ' + n, function() {
      assert(n);
    });
  });
});

// =>

  suite
    ✓ correctly handles 1
    ✓ correctly handles 2
    ✓ correctly handles 3


  3 passing (7ms)

@hellboy81

Very important:

  • describe can not be parameterized due to problems
    • reason?
  • only it can

@dasilvacontin
Contributor

describe can not be parameterized due to problems

What problems?

[1, 2, 3, 4].forEach(function (val) {
  describe('test ' + val, function () {
    it('bla', function () {

    })
  })
})
➜  js  mocha describe-parameterized.js


  test 1
    ✓ bla

  test 2
    ✓ bla

  test 3
    ✓ bla

  test 4
    ✓ bla


  4 passing (8ms)

@bananu7

bananu7 commented Jan 7, 2016

I agree with @dasilvacontin, this is the approach we use and it works reasonably well. Not sure if polluting mocha's API with parametrization is necessary at all TBH.

@btelles

btelles commented Mar 25, 2016

FWIW, @dasilvacontin's solution only appears to work, but the val value isn't changed in the it blocks:

[1, 2, 3, 4].forEach(function (val) {
  describe('test ' + val, function () {
    it('bla', function () {
      console.log(val);
    })
  })
})

test 1
  ✓ bla
  4
test 2
  ✓ bla
  4
test 3
  ✓ bla
  4
test 4
  ✓ bla
  4

@danielstjules
Contributor

@btelles What version of mocha are you using? Cause it works for me.

$ cat test.js
[1, 2, 3, 4].forEach(function (val) {
  describe('test ' + val, function () {
    it('bla', function () {
      console.log(val);
    })
  })
})
dstjules:~/Desktop
$ mocha --version
2.4.5
dstjules:~/Desktop
$ mocha test.js


  test 1
1
    ✓ bla

  test 2
2
    ✓ bla

  test 3
3
    ✓ bla

  test 4
4
    ✓ bla


  4 passing (10ms)

@dasilvacontin
Contributor

@danielstjules Works here as well.

@btelles is that really the code you are using? From the output it looks as if you were referencing an iterator variable – once the its are actually executed, the loop has finished and the iterator holds its final value.
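
For illustration, a hypothetical variant that does reproduce the output pasted above: a var-scoped loop variable shared by every closure, unlike forEach's per-iteration parameter.

var vals = [1, 2, 3, 4];
for (var i = 0; i < vals.length; i++) {
  var val = vals[i];                    // `var` is function-scoped: one shared binding
  describe('test ' + val, function () { // the title string is built right away, so it looks fine
    it('bla', function () {
      console.log(val);                 // runs after the loop has finished: always logs 4
    });
  });
}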

@lawrencec

FWIW, there's a module I wrote called Unroll which provides parameterized tests like the above, but with named parameters that are also reflected in the test title. Useful if you want the test titles to be informative with regard to the parameters.

robertknight added a commit to hypothesis/h that referenced this issue Apr 18, 2016
Mocha lacks built-in support [1] for writing parameterized tests and the
suggested solution [2] involves a bunch of boilerplate* which has IMO
resulted in different styles of parameterized tests in our codebase and
not having parameterized tests when they would be useful to attain more
complete coverage.

This adds a helper inspired by [3] for writing parameterized tests and
switches several existing places in our code to use it.

* Though less with ES2015 syntax.

[1] mochajs/mocha#1454
[2] https://mochajs.org/#dynamically-generating-tests
[3] https://github.com/lawrencec/Unroll
nickstenning pushed a commit to hypothesis/browser-extension that referenced this issue Jul 8, 2016
@mikejsdev

mikejsdev commented Mar 5, 2017

I know this is old but there is now a really simple npm package to make this easier: mocha-param
[screenshot: mocha-param usage example]

@binduwavell

@mikejsdev I don't understand the reasoning behind mocha-param when we can use: https://mochajs.org/#dynamically-generating-tests. The latter allows for unique names for each test, which is pretty huge. Presumably we can use this same dynamic technique to stamp out multiple describes too (although I have not tried that.) Can you help me understand the use cases where mocha-param is preferable to dynamically generating tests?

@timaschew

I agree with @binduwavell
And this has also been confirmed earlier by @dasilvacontin and @danielstjules, and now by me as well with mocha 3.2.0

To everyone who is writing and publishing modules like @mikejsdev:
Please add a note about the motivation in your README:

  • Why did you write the module?
  • Which problems are you trying to solve?
  • What are you doing differently compared with X?

@josh-cain

Hate to dig this one back up, but I wonder if being able to specify parameters in such a way that the mocha context is aware of them could improve support for use of beforeEach/afterEach hooks. I just stumbled across #4072 and it seems somewhat relevant here 🤷‍♂
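
To illustrate the current interplay: with the forEach pattern, every generated it still runs the surrounding beforeEach/afterEach hooks, but the hooks cannot see which data point the upcoming test uses. The createFixture and resize names below are hypothetical.

var assert = require('assert');

describe('widget', function () {
  var fixture;

  beforeEach(function () {
    fixture = createFixture(); // hypothetical per-test setup
  });

  [1, 2, 3].forEach(function (size) {
    it('handles size ' + size, function () {
      assert(fixture.resize(size)); // hypothetical API under test
    });
  });
});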

@fasttime

Any way to do something like it.only and it.skip with (one particular instance of) a parameterized test? For me, that would be the only reason for using a helper package rather than dynamically generating tests.
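
One hedged way to get that with plain forEach-generated tests is to choose the it variant per data point. The isPrime function and the focusOn value below are illustrative.

var assert = require('assert');

var cases = [2, 3, 5, 7];
var focusOn = 5; // hypothetical: the single data point to run exclusively

cases.forEach(function (n) {
  var test = (n === focusOn) ? it.only : it; // or it.skip to exclude a point
  test('isPrime(' + n + ') is true', function () {
    assert(isPrime(n));
  });
});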

@Overdrivr

@binduwavell @timaschew

Just my two cents, but coming from the Python world the syntax proposed in https://mochajs.org/#dynamically-generating-tests looks very poor to me:

  1. The parameters must be retrieved through the test variable while in fact they should just be function parameters
  2. Each tests entry must contain an object with key:value pairs for each parameter. This makes the initial parametric data way more verbose than it should be.
  3. The test function itself (the one passed to it(...)) has zero parameters, which is not very explicit or readable:

describe('add()', function() {
  var tests = [
   // See 2., extra key and values just for re-defining function parameters.
    {args: [1, 2], expected: 3},
    {args: [1, 2, 3], expected: 6},
    {args: [1, 2, 3, 4], expected: 10}
  ];

  tests.forEach(function(test) {
    it('correctly adds ' + test.args.length + ' args', 
      // See 3., no parameters to this function. Except it does have some
      function() {
      // See 1., cannot directly use `args` as a variable, need to go through `test.args`
      var res = add.apply(null, test.args);
      assert.equal(res, test.expected);
    });
  });
});

A (hypothetical) syntax that I would find superior would be:

describe('add()', function() {
  var tests = [
    // values, expected
    [[1, 2],  3],
    [[1, 2, 3],  6],
    [[1, 2, 3, 4], 10]
  ];

  using(tests).it('correctly adds with args', function(values, expected) {
    var res = add.apply(null, values);
    assert.equal(res, expected);
  });
});
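
For comparison, a rough approximation achievable today with the documented forEach pattern, spreading each tuple via Function.prototype.apply; still not as clean as the hypothetical using() syntax, but it avoids the per-parameter keys.

describe('add()', function() {
  var tests = [
    // [values, expected]
    [[1, 2], 3],
    [[1, 2, 3], 6],
    [[1, 2, 3, 4], 10]
  ];

  tests.forEach(function(t) {
    var values = t[0], expected = t[1];
    it('correctly adds ' + values.length + ' args', function() {
      var res = add.apply(null, values);
      assert.equal(res, expected);
    });
  });
});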
