[WIP] parallelise tests using unittest2 #332
base: master
Conversation
I believe the bug is in …
It was: I made the …
- threadpool.setMinPoolSize(2) to avoid AppVeyor OOM
- jsonTest: clear the global "status" table after we're done with it
- try to reduce maximum memory usage
- tests: switch to debug builds
- 32-bit Windows: set the IMAGE_FILE_LARGE_ADDRESS_AWARE flag so we can use PAE, if enabled, and access more than 2 GiB of RAM
Force-pushed from b3f2d0a to 27e3162.
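On that last commit item, a hedged sketch of how the flag can be set with a MinGW toolchain: `--large-address-aware` is GNU ld's name for the PE header bit, but the guard and config placement below are assumptions for illustration, not the PR's actual change.

```nim
# config.nims (sketch; assumes a MinGW/GCC toolchain and that the
# target defines are visible at this point in the config)
when defined(windows) and defined(i386):
  # Sets IMAGE_FILE_LARGE_ADDRESS_AWARE in the PE header so the
  # 32-bit process can address more than 2 GiB.
  switch("passL", "-Wl,--large-address-aware")
```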
This is #332 with parallelism disabled, while a threadpool deadlock is being debugged.
Times have changed - now the test suite is huge and takes longer to run than it takes to compile. The original patch won't apply any more, but the idea is welcome - parallel tests would be great! (Mentioning it here because the PR is still open; might as well use it or close it.) If there is interest in bringing this change, or another like it, up to date, the comments below discussing other things we'd like may be relevant (from #683 (comment)):
Unfortunately, threadpool still deadlocks occasionally.
The (easy) way to run tests in parallel is to run them as separate processes - it doesn't require anything of the testing framework and makes sure things are nicely isolated. This is especially important because we don't use threading a lot, and some of the libraries use globals and other thread-incompatible constructs that don't get tested much. That said, transitioning to unittest2 should be done regardless - its biggest advantage is that it puts each test in a separate proc. Finally, @jangko was looking into mocha - writing a mocha collector that outputs the JSON (without using json-serialization) would also be done in unittest2.
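To illustrate the process-per-suite idea, here's a minimal sketch using the standard library's execProcesses; the binary names are hypothetical, standing in for test suites each compiled to their own executable.

```nim
import std/osproc

# Hypothetical per-suite binaries - one executable per test suite,
# so globals, threadvars and friends stay isolated per process.
const suites = [
  "build/test_vm",
  "build/test_rpc",
  "build/test_sync",
]

# Runs the commands in parallel, at most countProcessors() at a time,
# streaming their output to the parent (the default options);
# returns the highest exit code seen across all processes.
let worst = execProcesses(suites)
quit(worst)
```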
I agree. There are some downsides to performance, but it's much better isolation, and things that might use globals, threadvars, timers, etc. need that isolation for the tests to mean anything. (And some tests would want to launch their own threads - e.g. threadpool tests.) Processes also solve the stdout/stderr-are-not-per-thread problem. On unix there's fork; Windows doesn't have fork. Between those two, I think there's merit in using a "processpool" to allow related tests to run in the same process sequentially, for tests we're reasonably confident are fine with that.
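A coarse approximation of that "processpool" idea, under the same hypothetical-binary assumptions: related suites share one worker slot and run back to back (here as sequential shell commands rather than literally one process), while the buckets themselves run in parallel.

```nim
import std/osproc

# Hypothetical buckets: suites we're reasonably confident can share
# a slot run sequentially; touchier ones keep a slot to themselves.
const buckets = [
  "build/test_vm_part1 && build/test_vm_part2",
  "build/test_rpc",
  "build/test_db && build/test_trie",
]

# poEvalCommand hands each entry to the shell, so `&&` chains the
# related suites sequentially within a bucket.
quit execProcesses(buckets,
                   options = {poEvalCommand, poStdErrToStdOut, poParentStreams})
```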
Most of the Nimbus-eth1 tests already use unittest2. Only these are still on unittest: …
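Migrating the remainder is usually mechanical, since unittest2 keeps the familiar suite/test/check DSL from the standard unittest module; a minimal sketch:

```nim
# was: import unittest
import unittest2  # drop-in replacement with the same DSL

suite "addition":
  test "small numbers":
    check 1 + 1 == 2

  test "negatives":
    check -1 + -1 == -2
```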
If it's using unittest2, and the headline feature is parallelism, why don't I see any? It's 1 core all the way.
It's not, any more - parallelism has to be enabled, and is not working reliably. There are other significant features in there, however, in particular …
the good
time build/all_tests
Relevant gains, right? Too bad that most of the time is taken by compilation, not the actual test running.
rm -rf nimcache; time make test
Why the 23s difference, when only the test suites imported in "all_tests.nim" were parallelised and the runtime difference there is a little under 3s? My guess is that some changes in "test_helpers.nim" made the code faster to compile.
the bad
There's an occasional deadlock in unittest2's wrapper for threadpool's sync(). I hope it will be solved by the two commits I made to Nim's devel branch, but I need them to reach a public release before I can refactor unittest2 to depend on those fixes.
Until that's sorted out, this PR is in limbo.
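For reference, the general shape of the pattern in question - a minimal sketch of spawning work on std/threadpool and blocking in sync(), not unittest2's actual internals (compile with --threads:on):

```nim
import std/threadpool

proc runTest(name: string) =
  # stand-in for executing one test (hypothetical)
  echo "running ", name

proc runAllTests() =
  for name in ["test_a", "test_b", "test_c"]:
    spawn runTest(name)
  # Block until every spawned task has finished; this sync() is the
  # call that occasionally never returned.
  sync()

runAllTests()
```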
the ugly
Adding one extra proc to the general state test runner was enough to bring back VM-related call-stack overflow issues. It's unfortunate that we're regressing on 17 JSON test files, but it's good to know that the workarounds in place were not something we could rely on in the long term.
I wasn't able to parallelise "test_rpc.nim" and "test_rpc_whisper.nim", due to a strange
'yield' only allowed in an iterator
error when await appeared inside a test. Maybe it's something simple, like an error inside the compiler's checks. Maybe it's something complex emerging from mixing async and threads.
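A minimal sketch of the pattern, using std/asyncdispatch for illustration (the project's own async backend may differ): await expands to yield, which is only legal inside an async proc or iterator, so a bare test body rejects it, while driving the future with waitFor compiles fine.

```nim
import std/asyncdispatch, unittest2

proc fetchValue(): Future[int] {.async.} =
  return 42

test "async call inside a test body":
  # check (await fetchValue()) == 42
  #   -> "'yield' only allowed in an iterator": the test body is
  #      not an async proc/iterator, so the expanded `yield` fails.
  # Running the future to completion with waitFor works instead:
  check waitFor(fetchValue()) == 42
```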
the weird
After changing the jsonTest() logic, more test results appear in the corresponding *.md log files, and it's not just the newly added explicit skips: hundreds of newly passing JSON test fixtures show up in those logs. I think a logic bug was preventing this logging, and it was inadvertently fixed by the changes meant to ensure GC-safety inside tests.