
Improve test coverage #3

Open
mathiasbynens opened this issue Nov 8, 2018 · 7 comments

Comments

@mathiasbynens
Member

The current implementation has been tested quite well, but even more rigorous testing is always possible.

Should you find any bugs, please report them!

@dubzzz

dubzzz commented Nov 9, 2018

Why not use property-based testing on a library like this (in addition to regular tests)? It is well suited to this kind of library.
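To illustrate the idea, here is a minimal sketch of a property-based test written without any framework (a real setup would use something like fast-check, as discussed below); `randomBigInt` and the specific invariants are illustrative, not from this thread:

```javascript
// Hand-rolled property-based testing sketch for BigInt operations.
// NOTE: randomBigInt and the chosen properties are illustrative assumptions,
// not part of the project under discussion.

function randomBigInt(maxBits = 128) {
  // Build a random BigInt bit by bit, with a random sign.
  const bits = 1 + Math.floor(Math.random() * maxBits);
  let n = 0n;
  for (let i = 0; i < bits; i++) {
    n = (n << 1n) | BigInt(Math.random() < 0.5 ? 1 : 0);
  }
  return Math.random() < 0.5 ? -n : n;
}

function checkProperty(name, property, runs = 1000) {
  // Run the property against many random input pairs; throw on failure.
  for (let i = 0; i < runs; i++) {
    const a = randomBigInt();
    const b = randomBigInt();
    if (!property(a, b)) {
      throw new Error(`${name} failed for a=${a}, b=${b}`);
    }
  }
}

// Algebraic invariants that any correct implementation must satisfy.
checkProperty('& is commutative', (a, b) => (a & b) === (b & a));
checkProperty('| is commutative', (a, b) => (a | b) === (b | a));
checkProperty('add/subtract round-trip', (a, b) => a + b - b === a);
checkProperty('De Morgan', (a, b) => ~(a & b) === (~a | ~b));
```

A framework like fast-check adds value on top of this pattern mainly through input shrinking: when a property fails, it searches for a minimal failing input.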

@hchiam

hchiam commented Nov 11, 2018

@dubzzz

dubzzz commented Nov 12, 2018

I already worked on such tests adapted for BigInt. You might be interested in https://runkit.com/dubzzz/5b9f6183f459f700125eee17 (it checks the big-integer npm package against multiple properties using the fast-check framework).

@dubzzz

dubzzz commented Nov 13, 2018

@mathiasbynens I wrote some basic property-based tests; see dubzzz@7c7f6d5 for more.

@jakobkummerow
Collaborator

Thanks for these contributions, I appreciate the eagerness to help!

I must admit that I'm not a huge fan of this particular approach, though. For example, testing that a & b === b & a and a | b === b | a is all well and good, but it doesn't actually detect whether the operations are implemented correctly. As a hypothetical example, if the implementation mixed up & and |, such a test would not detect it.
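This point can be demonstrated concretely. Below, `buggyAnd` is a hypothetical broken implementation (an assumption for illustration, not real project code); a commutativity property passes on it, while a comparison against a known-correct result does not:

```javascript
// Hypothetical buggy implementation that mixes up & and |.
const buggyAnd = (a, b) => a | b; // bug: computes OR instead of AND

// A commutativity property cannot detect the bug,
// because | is just as commutative as &:
console.log(buggyAnd(12n, 10n) === buggyAnd(10n, 12n)); // true: test passes despite the bug

// Only an oracle-style expectation catches it:
console.log(buggyAnd(12n, 10n) === (12n & 10n)); // false: 14n !== 8n, bug exposed
```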

My understanding is that the big argument in favor of property-based testing is that it's a form of randomized testing, where an automated test generator potentially produces test cases that a human might not have thought of. The obvious difficulty with randomly generated test cases is how to generate expected values, which is where the properties come into play: they replace specific expectations with general invariants. In the case of BigInts, we have what I believe is an even better option: use another language's implementation to generate assumed-correct expected values. That's what the existing generator does: it uses Python to compute c such that it can actually test that a & b === c, which is a stronger assertion, because it checks that & actually produces correct results. I have a nearly finished patch that extends the script's abilities to generate faster-running correctness tests in addition to the performance-oriented (but also correctness-checking!) benchmarks it currently generates.
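The oracle-style approach described above can be sketched as follows. This is not the project's actual generator output; the test cases and values are illustrative (the expected results were computed independently with Python's arbitrary-precision integers, e.g. `hex(a & b)`):

```javascript
// Oracle-style tests: expected values precomputed by a trusted
// implementation (here, Python's arbitrary-precision integers).
// These particular triples are illustrative, not from the project's generator.
const andCases = [
  // [a, b, expected value of a & b]
  [0xffn, 0x0fn, 0x0fn],
  [-1n, 0x1234n, 0x1234n], // -1n is all ones in two's complement
  [0x123456789abcdef0n, 0x0fedcba987654321n, 0x0224422882244220n],
];

for (const [a, b, expected] of andCases) {
  if ((a & b) !== expected) {
    throw new Error(`a & b mismatch for a=${a}, b=${b}: got ${a & b}, expected ${expected}`);
  }
}
```

Unlike an invariant such as commutativity, each case here pins down the exact result, so a swapped or miscomputed operation fails immediately.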

@mathiasbynens
Member Author

I agree with @jakobkummerow; this is already sufficiently handled by the test generator.

@dubzzz

dubzzz commented Nov 15, 2018

Hi @jakobkummerow,

Indeed, testing against other languages' implementations is clearly a good option in this particular case. I did not think of that possibility at the beginning.

Meanwhile, you can still rely on randomly generated a and b: run them against the two implementations and compare their respective outputs, confirming that they are consistent across a large range of possible inputs.
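This differential-testing idea might look like the following sketch, where native BigInt serves as the reference and `candidateAdd` is a hypothetical stand-in for the library under test (an assumption for illustration):

```javascript
// Differential testing: compare a candidate implementation against
// native BigInt on random inputs.
// `candidateAdd` is a hypothetical stand-in for the library under test.
const candidateAdd = (a, b) => a + b; // replace with the implementation being tested

function randomBigInt(maxBits = 256) {
  // Build a random BigInt bit by bit, with a random sign.
  const bits = 1 + Math.floor(Math.random() * maxBits);
  let n = 0n;
  for (let i = 0; i < bits; i++) {
    n = (n << 1n) | BigInt(Math.random() < 0.5 ? 1 : 0);
  }
  return Math.random() < 0.5 ? -n : n;
}

for (let i = 0; i < 1000; i++) {
  const a = randomBigInt();
  const b = randomBigInt();
  const reference = a + b; // native BigInt acts as the oracle
  const candidate = candidateAdd(a, b);
  if (candidate !== reference) {
    throw new Error(`Mismatch: ${a} + ${b} gave ${candidate}, expected ${reference}`);
  }
}
```

Any disagreement between the two implementations on any sampled pair surfaces as a concrete, reproducible failing input.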

Thanks for your answers :)
