
Performance regression traversing large arrays compared to other engines #1294

Open

maksimlya opened this issue Feb 3, 2024 · 4 comments

@maksimlya

Hi, I've noticed some parts of my application being slow, and while investigating I found that iterating over an array on Hermes is significantly slower than on other platforms.

Timings for the same test:

NodeJS: 15 ms
Android WebView: 100 ms
Android RN JSC: 60 ms
Android RN Hermes: 1400 ms

The test simply runs a loop 10 million times and does some work. A reproducible example is provided via the 'Test Convert' button:

https://github.com/maksimlya/TestRNPerf

@maksimlya maksimlya added the bug Something isn't working label Feb 3, 2024
@tmikov
Contributor

tmikov commented Feb 3, 2024

Hermes is an interpreter optimized for very fast startup and small binary size. You are comparing it against JSC, which is a type-specializing JIT. At steady state, given enough time to warm up, a JIT will always have a performance advantage. So, this is expected.

I extracted the benchmark from your code:

```js
function testConvert1() {
  const startTime = Date.now();
  const byteArray = new Uint8Array(10000000);
  for (let n = 0; n < byteArray.length; n++) {
  }
  print("ExecutionTime = ", Date.now() - startTime);
}
```

This code, a tight array loop with an empty body, is particularly advantageous for a JIT. A JIT could legitimately optimize out the entire loop, bringing the time down to zero. By comparison, a JIT will have a much harder time optimizing many small routines with allocations, etc.

If we modify the code to actually do something, and keep the typed array length in a local variable to avoid fetching it on every iteration, we get this:

```js
function testConvert2() {
  const startTime = Date.now();
  const byteArray = new Uint8Array(10000000);
  let sum = 0;
  for (let n = 0, len = byteArray.length; n < len; n++) {
    sum += byteArray[n];
  }
  print(sum, "ExecutionTime = ", Date.now() - startTime);
}
```

I ran this code with Hermes, v8 and JSC, with their JITs on and off:

- hermes          104 ms
- v8 --jitless    179 ms
- jsc --useJIT=0  116 ms
- v8               10 ms
- jsc              10 ms

As an interpreter, Hermes is currently on par with the other engines running in interpreter-only mode, and often has an advantage.

With all that said, we realize that there are situations where more performance is needed, and Hermes is unable to serve them well. That's why we are working on Static Hermes, which will be much faster.

Running the same benchmark with Static Hermes currently takes 14 ms, so we are very close, and we keep improving.

@tmikov tmikov added performance and removed bug Something isn't working labels Feb 3, 2024
@maksimlya
Author

Ok, understood, thanks for the answer. There are some other performance differences I've noticed as well. I posted them in the react-native project, but they said I should take it to the library authors, although I think the issue is with some of the low-level APIs.

I was looking at the performance of TextEncoder (from the 'text-encoding' library) and cheerio (a jQuery-like HTML library), and saw that they perform much worse on React Native than in a WebView or Node.js, and about twice as badly on Hermes as on JSC.

TextEncoder in my example (on a 100 KB input) takes ~170 ms, while a WebView takes 1-2 ms (both Android and iPhone) and Node.js takes 14 ms (I suppose it has something to do with Web API optimizations).

Another case is cheerio (jQuery-like) performance: when I perform some HTML manipulation via cheerio, I also see a big hit compared to other platforms:

RN With Hermes(Redmi note 9): 3500 ms
RN JSC(Redmi note 9): 1700 ms
Android Webview(Redmi note 9): 150-200 ms
iOS Webview(iphone XR): 90 ms
NodeJS(Mac mini 2018): 150 ms

@maksimlya
Author

I am now trying to work around TextEncoder via JSI/Go, and if that goes well I might attempt the same with cheerio, although that one might be bulky.
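As a lighter-weight alternative to the JSI/Go route, a dependency-free UTF-8 encoder in plain JS may already beat the 'text-encoding' polyfill. This is a hedged sketch, not the approach described above and not a drop-in TextEncoder replacement (it returns only the byte array):

```javascript
// Plain-JS UTF-8 encoder sketch: encodes a string to a Uint8Array
// without any library dependency. Handles the full Unicode range,
// including astral code points encoded as surrogate pairs.
function encodeUtf8(str) {
  const out = [];
  for (let i = 0; i < str.length; i++) {
    const cp = str.codePointAt(i);
    if (cp > 0xffff) i++; // code point spans two UTF-16 units; skip the low surrogate
    if (cp < 0x80) {
      out.push(cp);
    } else if (cp < 0x800) {
      out.push(0xc0 | (cp >> 6), 0x80 | (cp & 0x3f));
    } else if (cp < 0x10000) {
      out.push(0xe0 | (cp >> 12), 0x80 | ((cp >> 6) & 0x3f), 0x80 | (cp & 0x3f));
    } else {
      out.push(
        0xf0 | (cp >> 18),
        0x80 | ((cp >> 12) & 0x3f),
        0x80 | ((cp >> 6) & 0x3f),
        0x80 | (cp & 0x3f)
      );
    }
  }
  return new Uint8Array(out);
}

console.log(encodeUtf8("héllo").length); // 6: 'é' (U+00E9) encodes to 2 bytes
```

Building the result in a plain array and converting once at the end avoids repeated typed-array writes, which can matter on an interpreter.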

@tmikov
Contributor

tmikov commented Feb 4, 2024

We are currently working on a native implementation of TextEncoder, so that will be much faster soon.

efstathiosntonas referenced this issue in facebook/react-native Feb 28, 2024