
Sequential network requests are simulated as parallel and the first rtt is ignored #15897

Open
mbehzad opened this issue Mar 27, 2024 · 0 comments

mbehzad commented Mar 27, 2024

Summary
The first RTT for requests that can only happen one after another is not taken into account.

Hi,

Imagine a page load where the HTML page has a single script module which renders the LCP image. But this script has a dependency, in the form of an ESM import (`import "./script-2.js"`), on script-2. script-2 loads script-3, script-3 loads script-4, and so on until script-9 imports script-10. Only once all dependencies are imported can the LCP image be rendered:
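The module chain looks roughly like this (file names as in the test page above; `renderLcpImage` is a hypothetical stand-in for whatever finally renders the image):

```js
// script-1.js — the entry module referenced by the HTML page.
// The browser can only discover the next import after this file has arrived.
import "./script-2.js";
renderLcpImage(); // hypothetical: runs once the whole chain has loaded

// script-2.js
import "./script-3.js";

// ...and so on, until script-9.js:
import "./script-10.js";
```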

For the LCP timing, I would expect the browser to need 4-5 round trips to establish the TCP connection and fetch the initial HTML, 1 RTT per script file (the browser doesn't know it needs e.g. script-4 before it has loaded script-3, so it can't download them in parallel), and finally one last RTT for the image. On top of that come the server processing time and the actual content download for each request, but for simplicity assume those are zero. This gives a minimum LCP (for rtt = 150 ms) of 2,250 ms.
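Written out, the back-of-the-envelope arithmetic above looks like this (the constants come from the scenario, not from Lighthouse):

```js
const rtt = 150;              // ms per round trip, as assumed above
const connectionAndHtml = 4;  // DNS/TCP/TLS setup plus the HTML fetch (~4-5 RTTs)
const scripts = 10;           // script-1 … script-10, discovered one after another
const lcpImage = 1;           // the final request for the LCP image

// Server processing and download time are assumed to be zero, as above.
const minLcpMs = (connectionAndHtml + scripts + lcpImage) * rtt;
console.log(minLcpMs); // 2250
```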
But Lighthouse reports an LCP of 0.8 s (PSI result, LH 11.5.0, Chrome 122) for this test page. A WPT test, for comparison, reports the LCP at around 3 s.

I think the issue is that Lantern's network-throttling simulation assumes that resources sharing the same connection are always requested in parallel, and therefore don't add any additional RTTs for their TTFB (`if (this._warmed && this._h2) timeToFirstByte = 0;`, tcp-connection.js#L152). This condition might need to be extended with a check that the node's networkRequestTime or rendererStartTime was not after the networkEndTime of the previous node on the same TCP connection in the original run (i.e. it was requested with, or during, another request).
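A sketch of what the extended check could look like. This is not Lighthouse's actual code; the function name and the shape of the record objects are hypothetical, and the real change would live inside tcp-connection.js:

```js
// Decide whether a request on a warm H2 connection may skip its TTFB round trip.
// `record` and `previousRecord` stand in for the simulator's node timing data
// (networkRequestTime / networkEndTime, in ms, from the observed run).
function canSkipTtfb(connection, record, previousRecord) {
  // Original condition: warm H2 connection => no extra TTFB.
  if (!(connection.isWarm() && connection.isH2())) return false;

  // Proposed extension: only skip the round trip if this request actually
  // overlapped the previous request on the connection in the original run.
  // A request that started only after the previous response had finished was
  // discovered sequentially and should pay its own RTT.
  if (!previousRecord) return true;
  return record.networkRequestTime <= previousRecord.networkEndTime;
}
```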

The same applies to the extra capacity of the response to one request being filled up with the content of the next request (`extraBytesDownloaded = ... totalBytesDownloaded - bytesToDownload`, tcp-connection.js#L178), which would only be true when the server is already aware of the next request, which isn't the case here.
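The extra-capacity case could be guarded the same way; again a hypothetical sketch with illustrative names, not the actual tcp-connection.js code:

```js
// Leftover download capacity from one response can only be credited to the
// next request if the server already knew about that request, i.e. the two
// requests overlapped in the observed run.
function extraBytesForNextRequest(totalBytesDownloaded, bytesToDownload, requestsOverlapped) {
  const leftover = Math.max(0, totalBytesDownloaded - bytesToDownload);
  return requestsOverlapped ? leftover : 0;
}
```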

For most websites this scenario is irrelevant, but no-build frontends relying on native ES modules have recently been gaining popularity, partly on the strength of their good Lighthouse scores (see DHH's post for example), which makes an accurate Lighthouse score for such setups important.

If any of this makes sense, let me know and I'll try to open a pull request.

Cheers
Mehran
