performance mystery of jest --runInBand #46

Open
zhenyulin opened this issue Sep 6, 2018 · 10 comments

@zhenyulin

I'm currently using jest-runner-eslint to lint src before tests in TDD and was trying to speed up the lint process. Counterintuitively, I found that jest --runInBand delivers better linting performance than having jest lint multiple files in parallel.

So this raises the question: what makes jest --runInBand actually faster at running jest-runner-eslint?

Some stats:

time npx jest --runInBand:

Test Suites: 15 passed, 15 total
Tests:       15 passed, 15 total
Snapshots:   0 total
Time:        2.374s, estimated 3s
Ran all test suites.
npx jest --runInBand  3.08s user 0.35s system 113% cpu 3.027 total

time npx jest:

Test Suites: 15 passed, 15 total
Tests:       15 passed, 15 total
Snapshots:   0 total
Time:        2.749s, estimated 3s
Ran all test suites.
npx jest  17.84s user 1.69s system 579% cpu 3.368 total

(on average, it is 10% slower than jest --runInBand)

time npx eslint src:

npx eslint src  2.33s user 0.19s system 115% cpu 2.184 total

(faster)

time npx eslint_d src:

eslint_d src  0.09s user 0.02s system 20% cpu 0.541 total

(super fast when it is not the first run)

It would be really nice to support eslint_d to speed up the lint process and get more immediate feedback in TDD.

@ljharb
Collaborator

ljharb commented Sep 7, 2018

How many cores do you have? In my experience jest requires 4+ to be able to parallelize effectively.

@zhenyulin
Author

@ljharb I'm running on a 2.3 GHz Intel Core i7 : )

In the stats above, running jest uses 579% CPU but achieves slower performance; that is a real mystery to me.

ljharb changed the title from "performance myth of jest --runInBand" to "performance mystery of jest --runInBand" Sep 7, 2018
@SimenB
Member

SimenB commented Sep 15, 2018

It might be a good idea to try to cache the cli instantiation based on the config passed (you should get name from it, which is unique):

// jest-runner-eslint constructs a fresh CLIEngine for every file it lints:
const { CLIEngine } = getLocalESLint(config);
const options = getESLintOptions(config);
const cli = new CLIEngine(options.cliOptions);

It might be that simply spinning up a CLIEngine for every single file linted is adding too much overhead.

Not that it should impact runInBand vs not, but still. My guess is that too few files are linted, and the overhead of spawning processes is bigger than the gain of parallelization.
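
For illustration, here is a minimal sketch of what that caching could look like, reusing the getLocalESLint and getESLintOptions helpers from the snippet above and keying the cache on the config's unique name. cachedCLIs and getCachedCLI are hypothetical names, not part of jest-runner-eslint:

// Hypothetical memoization sketch: construct one CLIEngine per unique config
// name and reuse it, instead of creating a new engine for every linted file.
const cachedCLIs = new Map();

const getCachedCLI = (config) => {
  if (!cachedCLIs.has(config.name)) {
    const { CLIEngine } = getLocalESLint(config);
    const options = getESLintOptions(config);
    cachedCLIs.set(config.name, new CLIEngine(options.cliOptions));
  }
  return cachedCLIs.get(config.name);
};

// The per-file construction shown above would then become:
// const cli = getCachedCLI(config);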

@raphael22

What kind of black magic is this?
Multithreaded slower than single-threaded?

@rogeliog
Member

rogeliog commented Dec 2, 2019

I think we should be able to memoize the CLIEngine instance.

@rodrigoehlers

A lot of time has passed since this issue was opened. Has there been any progress on this? It is a very interesting issue.

Jaid added a commit to Jaid/webpack-config-jaid that referenced this issue May 28, 2021
@rdsedmundo
Contributor

Still seeing this, almost 3 years on.

--runInBand cut my test time from 1m10s to 35s. For reference, the official eslint CLI takes around 32s.

@Havunen

Havunen commented Dec 27, 2022

I'm experiencing the same issue, and it seems to be because the slowest part is the module resolution process, which is done multiple times when not using --runInBand. I will try to investigate whether module resolution could be made faster using some configs or something.

@somewhatabstract

somewhatabstract commented Aug 29, 2023

There are two things that come to mind regarding a slowdown in parallelised execution like this:

  1. Code that is run more (like the FlatESLint check run when runESLint.js is imported for each worker - though that doesn't seem anywhere near enough to cause the large slowdown that is noted here)
  2. Resource contention, like disk access, CPU context switching, multiple processes waiting for exclusive access to the same thing and blocking each other.

(2) feels like the likely culprit in this case, but what resource would the workers be contending for every time they run? Looking at the runESLint function, they will all want to read the files referenced by config.setupTestFrameworkScriptFile and config.setupFilesAfterEnv and whatever they contain - but I doubt it's that.

It feels like it is more likely down to the ESLint implementation itself - it is not expecting to run in parallel. Each parallel instance is going to be loading the same config, and the same rules, for example. It may also be sharing the same cache for any ASTs (abstract syntax trees) it generates. Maybe that is creating a situation where one worker has to wait for another worker to finish accessing something before it can do its work.

I wonder if it would be useful to see which files are getting accessed, and when, while things run in band versus in parallel; that may indicate whether there is a bottleneck associated with file access.

It could also just be that ESLint startup is slow, and when executed in a single worker the process is able to cache things like the imported files so that startup is faster on subsequent runs. However, since startup is so slow, it makes things MUCH slower when it happens n times for n workers. That could be investigated by including some timing output for the main setup code, so we can see how long the first run takes versus subsequent runs within the same worker.
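
As a rough illustration of that kind of timing output, one could wrap the setup shown earlier in the thread with simple timestamps. This is just a sketch, not code from jest-runner-eslint:

// Hypothetical timing probe: measure how long ESLint setup takes per call,
// so the first run in a worker can be compared with later runs in that worker.
const setupStart = Date.now();

const { CLIEngine } = getLocalESLint(config);
const options = getESLintOptions(config);
const cli = new CLIEngine(options.cliOptions);

console.log(
  `[worker ${process.pid}] ESLint setup took ${Date.now() - setupStart}ms`
);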

@Havunen

Havunen commented Aug 30, 2023

Lately I have discovered that our application source code has multiple circular dependencies between files via import statements. It could be related to that.
