to stream or not to stream, that is the question #14

Open
mcollina opened this issue Feb 17, 2021 · 4 comments

@mcollina

Continuing from: piscinajs/piscina#108 (comment)

> The GC can still move things to the old generation during synchronous calls, can't it? From what I know, minor collection cycles are triggered by growth in new space allocations, not by time. If streaming rendering vs synchronous rendering generate roughly the same amount of garbage, plus or minus some promises and stream chunks, then they should promote roughly the same amount to the old space and cause roughly the same amount of slow major collections. What am I missing? Streaming rendering should also let the heap bloat less because chunks can be flushed and GC'd before the whole render is complete?

The above is not correct. The problem is that those objects live longer, and therefore they have a higher chance of getting moved to old space. This happens a lot. In most cases of synchronous renderToString(), everything gets collected before receiving any other data.
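To make the lifetime argument concrete, here is a minimal sketch, not from this thread, contrasting react-dom/server's synchronous and streaming render paths; `App` is a placeholder component. In the synchronous case the element tree becomes garbage as soon as the call returns, while in the streaming case it stays reachable across many event-loop turns while chunks are flushed, so a minor GC that runs mid-render can promote it to old space.

```js
// Sketch only: contrasting object lifetimes in the two render paths.
// `App` is a placeholder component; the APIs are react-dom/server's
// renderToString and renderToNodeStream (the streaming API at the time).
const React = require('react');
const { renderToString, renderToNodeStream } = require('react-dom/server');

function App() {
  return React.createElement('div', null, 'hello');
}

// Synchronous: the element tree and intermediate strings become garbage the
// moment this call returns, usually before the next minor GC even runs.
function renderSync(req, res) {
  res.end(renderToString(React.createElement(App)));
}

// Streaming: the partially rendered tree stays reachable across many
// event-loop turns while chunks are flushed, so a minor GC that runs
// mid-render is more likely to promote it to old space.
function renderStreaming(req, res) {
  renderToNodeStream(React.createElement(App)).pipe(res);
}
```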

> I also think you are still prioritizing lower server costs over TTFB, which compromises the experience, and I think if that's the case someone who cares should benchmark and prove that it really does lower server costs before we trade away experience for it.

We are not in agreement. In my experience, most React SSR lowers the end-user experience under high pressure, as the CPUs get extremely busy and the event loop overloads. A renderToString with a 100ms event loop block would easily become a 150-200ms event loop block using streams, e.g. going from 10 to 5 req/s. Simply put, it increases the chances of receiving more requests than the server can handle. I'm basically worried about the 99.9th latency percentile.
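The event-loop numbers above can be measured rather than estimated; a minimal sketch (assuming Node ≥ 12, not code from this thread) that reports delay percentiles with the built-in perf_hooks histogram:

```js
// Sketch only: watch event-loop delay percentiles while the server is under load.
const { monitorEventLoopDelay } = require('perf_hooks');

const histogram = monitorEventLoopDelay({ resolution: 10 }); // sample every 10ms
histogram.enable();

setInterval(() => {
  const ms = (ns) => (ns / 1e6).toFixed(1); // values are reported in nanoseconds
  console.log(
    `p50=${ms(histogram.percentile(50))}ms`,
    `p99=${ms(histogram.percentile(99))}ms`,
    `p99.9=${ms(histogram.percentile(99.9))}ms`,
    `max=${ms(histogram.max)}ms`
  );
  histogram.reset();
}, 5000).unref();
```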

@airhorns
Collaborator

> The above is not correct. The problem is that those objects live longer, and therefore they have a higher chance of getting moved to old space.

Which objects specifically? The React.Elements or the rendered strings? Because more garbage may be generated simultaneously, filling up the new space faster?

> In my experience, most React SSR lowers the end-user experience under high pressure, as the CPUs get extremely busy and the event loop overloads.

Sounds right to me, but isn't that why we want to use Piscina? To cap the number of simultaneous requests per event loop? With Piscina in place, good TTFBs become achievable again, I think.
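For context, a minimal sketch of that idea, not taken from the thread: the HTTP handler posts render work to a Piscina pool, the pool bounds both parallelism and queue depth, and render-worker.js is a hypothetical worker module that runs the synchronous renderToString off the main event loop.

```js
// server.js — sketch only; render-worker.js is hypothetical (see below).
const path = require('path');
const Piscina = require('piscina');

const pool = new Piscina({
  filename: path.resolve(__dirname, 'render-worker.js'),
  maxThreads: 4, // bound how many renders run in parallel
  maxQueue: 64,  // once full, run() rejects instead of queueing forever
});

async function handler(req, res) {
  try {
    // run() is the current Piscina API; older releases call it runTask().
    const html = await pool.run({ url: req.url });
    res.setHeader('content-type', 'text/html');
    res.end(html);
  } catch (err) {
    // Queue overflow or render failure: shed load instead of piling up latency.
    res.statusCode = 503;
    res.end('Service unavailable');
  }
}

// render-worker.js — each task is a synchronous render, so a worker handles
// one page at a time and the main thread's event loop never blocks on it:
//
//   const React = require('react');
//   const { renderToString } = require('react-dom/server');
//   module.exports = ({ url }) =>
//     renderToString(React.createElement(App, { url })); // App: your root component
```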

@mcollina
Author

> Which objects specifically? The React.Elements or the rendered strings?

Some time has passed and I do not recall exactly which objects... likely the React.Elements.

> Because more garbage may be generated simultaneously, filling up the new space faster?

More data is generated than can be collected in time, and it ends up being promoted to old space.

@airhorns
Collaborator

@mcollina do you still feel the same way about this? I have seen a few more React SSR frameworks fly by that support streaming for the same reasons I mentioned above: a much better TTFB and user experience, especially with things like Suspense and Server Components coming down the pipe, which let you send more data to the client after the initial render has been sent.
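For reference, a minimal sketch of what that streaming model looks like with the API that eventually shipped in React 18, renderToPipeableStream; it is not code from this thread, and App/Slow are placeholder components (Slow stands in for a subtree that would really suspend). The shell is flushed as soon as it is ready, and suspended boundaries stream in afterwards.

```js
// Sketch only: React 18's renderToPipeableStream with a Suspense boundary.
const React = require('react');
const { Suspense } = React;
const { renderToPipeableStream } = require('react-dom/server');

// Placeholder for a component that would really suspend on server data.
function Slow() {
  return React.createElement('section', null, 'late content');
}

function App() {
  return React.createElement(
    'main',
    null,
    React.createElement('h1', null, 'shell'),
    React.createElement(
      Suspense,
      { fallback: React.createElement('p', null, 'loading…') },
      React.createElement(Slow)
    )
  );
}

function handler(req, res) {
  const { pipe } = renderToPipeableStream(React.createElement(App), {
    onShellReady() {
      // TTFB: the static shell goes out as soon as it is ready...
      res.setHeader('content-type', 'text/html');
      pipe(res);
    },
    onShellError() {
      res.statusCode = 500;
      res.end('render failed');
    },
    // ...while suspended subtrees are flushed later as they resolve.
  });
}
```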

@mcollina
Author

This consideration still holds:

> A renderToString with a 100ms event loop block would easily become a 150-200ms event loop block using streams, e.g. going from 10 to 5 req/s. Simply put, it increases the chances of receiving more requests than the server can handle. I'm basically worried about the 99.9th latency percentile.

The game changer is Server Components: they will significantly improve TTFB. At least we can trade some scalability for improved TTFB. If we add a circuit breaker on top (like under-pressure), this will be a really good combo.
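A minimal sketch of that combo, not from this thread, assuming a recent Fastify and the under-pressure plugin (published as @fastify/under-pressure in newer releases); the thresholds and renderPage are illustrative placeholders:

```js
// Sketch only: shed load with under-pressure before the event loop drowns.
const fastify = require('fastify')();

fastify.register(require('under-pressure'), {
  maxEventLoopDelay: 100,              // ms of event-loop delay before 503s
  maxHeapUsedBytes: 512 * 1024 * 1024, // illustrative memory ceilings
  maxRssBytes: 1024 * 1024 * 1024,
  message: 'Under pressure!',          // body of the 503 response
  retryAfter: 10,                      // Retry-After header, in seconds
});

// renderPage is a stand-in for the real SSR call (sync or streaming).
async function renderPage(url) {
  return `<html><body>rendered ${url}</body></html>`;
}

fastify.get('/', async (req, reply) => {
  reply.type('text/html');
  return renderPage(req.url);
});

fastify.listen({ port: 3000 });
```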
