
RFC: impact of detectHostComponentNames #1579

Open
mdjastrzebski opened this issue Mar 28, 2024 · 5 comments

@mdjastrzebski
Member

mdjastrzebski commented Mar 28, 2024

Summary

I've been curious what the impact of automatically detecting host component names is. For lack of a better large test suite, I've used RNTL's own tests.

Here is my process:

  1. Disable the tests that fail with hardcoded hostComponentNames using test.skip().
  2. Measure execution time with time yarn test in each relevant scenario.
  3. Add a console.log() to detectHostComponentNames() and count how many times it executes (in a separate run from the timing measurement); see the counting sketch after this list.
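
For reference, step 3 can be done either by adding the log directly inside RNTL's internal detectHostComponentNames() or by wrapping it with a small counting helper along the lines of the sketch below (countCalls is a hypothetical utility, not part of RNTL or Jest):

```ts
// countCalls.ts — hypothetical helper: wraps a function and logs how many times
// it has been invoked, without changing its behavior.
export function countCalls<Args extends unknown[], Result>(
  label: string,
  fn: (...args: Args) => Result,
): (...args: Args) => Result {
  let count = 0;
  return (...args: Args): Result => {
    count += 1;
    console.log(`${label} call #${count}`);
    return fn(...args);
  };
}
```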

Note: we also have beforeEach(() => resetToDefaults()) in our test setup, so I measured both with and without it.

  • Without detectHostComponentNames: 39.4 s / 0 runs
  • With detectHostComponentNames, without beforeEach(() => resetToDefaults()): 44.6 s / 63 runs => 13% overhead
  • With detectHostComponentNames, with beforeEach(() => resetToDefaults()): 47.1 s / 585 runs => 20% overhead
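
For context, the beforeEach(() => resetToDefaults()) mentioned above lives in a Jest setup file that runs in every test file; a minimal sketch (the file name is an assumption) looks like this:

```ts
// jest-setup.ts (name assumed) — registered as a Jest setup file for the project.
import { resetToDefaults } from '@testing-library/react-native';

// Reset RNTL configuration before every test; per the numbers above, this also
// drops the cached host component names, so detection re-runs before each test.
beforeEach(() => {
  resetToDefaults();
});
```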

It seems that Jest by default recreates the runtime state for each test file, so detectHostComponentNames() gets called at least once per file.
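
A hypothetical illustration of why that happens: a module-level cache (like the one used for host component names) survives between tests within one file, but Jest gives each test file a fresh module registry, so every file starts with an empty cache and pays the detection cost again.

```ts
// Hypothetical sketch, not RNTL's actual implementation.
type HostComponentNames = { text: string; textInput: string };

let cachedNames: HostComponentNames | null = null;

function detectHostComponentNamesStub(): HostComponentNames {
  // In real RNTL this is the costly part: it renders a probe tree to detect names.
  console.log('running detection');
  return { text: 'Text', textInput: 'TextInput' };
}

export function getHostComponentNames(): HostComponentNames {
  // Cache hit within a single test file; a new file re-runs the detection.
  return (cachedNames ??= detectHostComponentNamesStub());
}
```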

Here are my questions:

  • Is my methodology correct?
  • Did you observe the same or a different effect on your own test base? (here is a link to a previous perf-impact issue of detect...)
  • Is that number significant enough to warrant research into alternative approaches?
@thymikee
Member

It's expected from Jest to recreate the whole environment for each test suite (file).

@pierrezimmermannbam
Collaborator

You're running the whole RNTL test suite? On my machine it takes about 7 s; does it really take only 39,4 ms on yours? From the tests I've run on my machine (M3 Pro), the difference is less significant: it takes on average 7.45 s to run the test suite without the detection and 7.7 s with it.

Other than that, yeah, the approach is correct. One thing to note, though: I would expect the difference to be even less significant in a real project, where the tests in each file are likely to be heavier (I'll try to test it on my current project too). I don't know if the perf impact is high enough to look into alternative approaches; I guess if we find a good one it could be worth it.

@mdjastrzebski
Member Author

@pierrezimmermannbam The unit was meant to be s, not ms (sorry for the confusion; I ran it on my MBP with an Intel CPU).

@pierrezimmermannbam
Collaborator

Oh I see, that makes sense. Strange that the difference in percentage would be so different on our machines. Also, I'm just realizing this shows that calling resetToDefaults is useless, no? Why were we doing it in the first place?

@mdjastrzebski
Member Author

Re %: I've also realized that I had swapped the 13% and 20% (the absolute numbers were correct). It should now be fixed in the issue description.

Re resetToDefaults: I guess the justification is to make sure that each test is indeed isolated from the others in the same file (tests are already isolated between files). We could either keep it or remove it; my worry was more about our users than about us.
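
A sketch of that isolation argument: without the per-test reset, configuration set in one test silently leaks into the next test in the same file (asyncUtilTimeout is taken from the RNTL configure() API; the test bodies are placeholders).

```ts
import { configure, resetToDefaults } from '@testing-library/react-native';

beforeEach(() => {
  resetToDefaults();
});

test('slow flow uses a longer timeout', () => {
  configure({ asyncUtilTimeout: 15_000 });
  // ...render and await waitFor(...) with the longer timeout...
});

test('regular flow', () => {
  // Thanks to the beforeEach reset, this test runs with default configuration
  // instead of inheriting the 15 s timeout from the previous test in this file.
});
```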
