
Code coverage CSV includes some comment lines in "lines covered" #342

Open
whastings opened this issue Nov 24, 2014 · 6 comments

Comments

@whastings

I have noticed that when I generate a coverage.csv from a spec run, Venus seems to include lines that are comments in its "lines covered", except for the docblock comment at the beginning of the file. I think Venus should either include all comments in "lines covered" or none.

Example/How to Reproduce:

JS File:

/** Some docblock comment stuff:
 *
 * Turnip greens yarrow ricebean rutabaga endive cauliflower sea lettuce
 * kohlrabi amaranth water spinach avocado daikon napa cabbage asparagus winter
 * purslane kale. Celery potato scallion desert raisin horseradish spinach
 * carrot soko. Lotus root water spinach fennel kombu maize bamboo shoot green
 * bean swiss chard seakale pumpkin onion chickpea gram corn pea. Brussels
 * sprout coriander water chestnut gourd swiss chard wakame kohlrabi beetroot
 * carrot watercress. Corn amaranth salsify bunya nuts nori azuki bean
 * chickweed potato bell pepper artichoke.
 *
 */
var testThing = {
  /**
   * Adds two numbers.
   */
  add: function(num1, num2) {
    return num1 + num2;
  },

  /**
   * Subtracts num2 from num1.
   */
  subtract: function(num1, num2) {
    return num1 - num2;
  },

  /**
   * Multiplies two numbers.
   */
  multiply: function(num1, num2) {
    return num1 * num2;
  }
};

Spec File:

/**
 * @venus-library mocha
 * @venus-code test.js
 */

describe('testThing', function() {
  describe('.add()', function() {
    it('should add numbers', function() {
      expect(testThing.add(1, 1)).to.be(2);
    });
  });

  describe('.multiply()', function() {
    it('should multiply numbers', function() {
      expect(testThing.multiply(2, 2)).to.be(4);
    });
  });

  describe('.subtract()', function() {
    it('should do subtraction', function() {
      expect(testThing.subtract(5, 3)).to.be(2);
    });
  });
});

Coverage CSV Report:

  • total lines: 34
  • code coverage: 0.65
  • lines covered: 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34
  • lines not covered: 1 2 3 4 5 6 7 8 9 10 11 12
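The 0.65 figure is consistent with Venus counting all 22 lines from 13 through 34 (inner comments and blanks included) as covered, out of 34 total lines. A minimal sketch of that arithmetic, assuming the ratio is simply covered lines over total lines:

```javascript
// Assumption: Venus computes coverage as (lines covered) / (total lines),
// treating every line after the leading docblock as coverable.
const linesCovered = 22;  // lines 13-34 from the report above
const totalLines = 34;    // includes the 12-line docblock at the top

const ratio = (linesCovered / totalLines).toFixed(2);
console.log(ratio); // "0.65"

// Counting only actual statement lines (all of which execute under the
// spec file), the true coverage would be 1.0, matching the CLI output.
```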

Of particular concern is that the report shows code coverage as 65% when it's clearly 100%, which the command-line output from Venus reflects.

Does anyone have any idea what is causing this and what we might do to address it?

@sethmcl
Contributor

sethmcl commented Dec 2, 2014

I took a look and confirmed that we are not ignoring comments, which is skewing the data. As a quick fix, I could print only the statement/function/branch coverage metrics (the same figures we show in the terminal). Would that solve your immediate need?

Something like:

source file,statement coverage,function coverage,branch coverage
/foo.js,.85,1,1

@derekbrown

Seth:

Thanks for looking into this. I've written a [hacky... you know my code :)] Python script that parses both Venus' output coverage.csv and the .js files within a given folder structure, ignoring the comments. Comparing the numbers gives us the percentage of the entire codebase covered by our suite of tests, which is fairly helpful for overall metrics and for knowing how well covered our various products are. The immediate problem is that Venus' line parsing doesn't match my script's line parsing, because of the aforementioned comment-block issue.

Thoughts?

@sethmcl
Contributor

sethmcl commented Dec 4, 2014

Thanks for the additional information. I just put up a pull request (#344) which greatly improves the code coverage support in Venus. Try it out and let me know what you think :)

@whastings
Author

Thanks Seth! I'll check it out ASAP, hopefully early next week, and see if it'll get us what we need.

Derek, as a follow-up: if all the reporting still divides the data into lines, functions, and branches, it would help if we could get your script to measure these data points as well, since it needs to parse files that aren't covered by any tests to get an entire directory tree's aggregate code coverage data. I'm guessing Istanbul uses Esprima to do this. Is there an easy way to call this or a similar tool from the Python script?

@derekbrown

@sethmcl: Fantastic as always. :)

@whastings: My script is certainly not meant to replicate Venus'/Istanbul's measurements, but to provide a stop-gap benchmark until a better, more sustainable system is found for our particular needs (namely in measuring coverage against an entire codebase). We can discuss offline from this issue if necessary. Feel free to shoot me an email.

@whastings
Author

@sethmcl: The new Coverage Summary at the end of the CLI output and the HTML report file are just what I needed. Thanks for turning them on! I don't know how you want to address the coverage.csv issue, but I've got what I needed for now.

@derekbrown: Sure, I'll follow up via email.
