check types in a separate thread #113

Open
KaelWD opened this issue Sep 7, 2018 · 3 comments
Labels
help wanted · kind: feature (New feature or request) · kind: optimization (Performance, space, size, etc. improvement)

Comments

@KaelWD

KaelWD commented Sep 7, 2018

What happens and why it is wrong

rollup --watch takes 14 seconds to rebuild with check: true and only 3 seconds with check: false. I propose creating a separate Node process just for type checking, similar to fork-ts-checker-webpack-plugin. This would allow us to have faster builds without losing real-time type errors.
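A rough sketch of the mechanism, in case it helps (file layout, names, and options here are purely illustrative, not rpt2 internals): the plugin would fork a small checker script that builds a program purely for diagnostics and reports them back asynchronously, so the bundle itself never waits on the type check.

```ts
// checker.ts (runs in the forked child): build a program only for diagnostics.
// Sketch only -- the real plugin would reuse its language service, watch state, etc.
import * as ts from "typescript";

process.on("message", (rootFiles: string[]) => {
  const program = ts.createProgram(rootFiles, { noEmit: true });
  const messages = ts
    .getPreEmitDiagnostics(program)
    .map((d) => ts.flattenDiagnosticMessageText(d.messageText, "\n"));
  process.send!(messages); // report back; the parent never blocks on this
});
```

```ts
// plugin side (sketch): fork the checker once, re-send the entry points on each rebuild
import { fork } from "child_process";

const checker = fork("./checker.js");
checker.on("message", (messages: string[]) => messages.forEach((m) => console.error(m)));
checker.send(["src/index.ts"]);
```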

Versions

  • typescript: 3.0.1
  • rollup: 0.65.0
  • rollup-plugin-typescript2: 0.17.0
@ezolenko
Owner

This is a good idea. Another speed-up might be had by rewriting the plugin to return promises (now that rollup expects that from plugins).
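For reference, rollup awaits promises returned from hooks like transform, so a minimal sketch of the shape this would take (the actual plugin work is just stubbed here with transpileModule):

```ts
import * as ts from "typescript";

// Minimal sketch of an async rollup transform hook: rollup awaits the returned
// Promise, so slower work no longer has to run synchronously inside the hook.
const asyncTsPlugin = {
  name: "sketch-async-ts",
  async transform(code: string, id: string) {
    if (!id.endsWith(".ts")) return null;
    // real work (language service calls, cache reads, etc.) could be awaited here
    const result = ts.transpileModule(code, {
      compilerOptions: { module: ts.ModuleKind.ESNext },
    });
    return { code: result.outputText, map: result.sourceMapText ?? null };
  },
};
```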

@agilgur5
Collaborator

agilgur5 commented Sep 11, 2022

Thought I should add some notes here on the investigations and concerns I've had over the past few months.

Performance is not necessarily better when threaded/forked

This is the first caveat I thought would be important to mention here. In Webpack-land, various TS loaders have had all sorts of performance issues and tried different solutions, and some ended up being de-optimizations.

With things like caching and forking/threading, the optimization is not straightforward. The answer to "does it increase performance?" is "it depends": specifically, on the characteristics of the machine, the project, and the bottlenecks.

Caching

Caching often uses more memory and/or uses the FS, so there are I/O, memory, and storage trade-offs. In modern computers, I/O is often the bottleneck, with disk being significantly slower (especially if still on HDD). So caching can actually make things slower for certain machines and for smaller projects that do not need to do much compute. For larger projects, this trade-off might be worthwhile, but larger projects may also require more memory, so that can become a bottleneck too. "Storage is cheap" nowadays, so that one is rarely a concern.

rpt2 actually has a cache built in and enabled by default. I've mentioned potentially using some heuristics to disable it by default in #362, and there are still a few optimizations to be had in the cache too.
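For anyone who wants to experiment with that trade-off today, the cache is already controllable from the plugin options. A sketch (double-check the option semantics against the README for your version):

```ts
// rollup.config.js -- sketch of the existing cache-related rpt2 options
import typescript from "rollup-plugin-typescript2";

export default {
  input: "src/index.ts",
  output: { file: "dist/index.js", format: "es" },
  plugins: [
    typescript({
      clean: true, // opt out of the on-disk cache entirely
      // cacheRoot: "path/to/faster/disk", // or relocate the cache instead
    }),
  ],
};
```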

Forking/Threading

Threading

As a baseline, Node doesn't have fully capable threads. worker_threads are relatively recent in Node, heavily influenced by the browser Web Workers API.
In particular, Node's memory sharing between threads is still very low-level, relying on SharedArrayBuffers as the main primitive.
(SharedArrayBuffers are basically manual shared memory mapping -- while this can be quite powerful, it is not very ergonomic. A higher-level abstraction could be built on top of this low-level interface, but I haven't seen one yet.)

Due to this limitation, we still can't share much memory between threads in Node. In rpt2's case, we'd want to share, at the very least, the TS LanguageService and possibly some other objects. There isn't really a way to share generic objects between threads in Node yet, so we can't share those.
(postMessage accepts more types of objects, but they are structured-cloned between threads, so nothing is actually shared.)
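To make that concrete, a small sketch of the two mechanisms Node does give us: raw bytes in a SharedArrayBuffer are genuinely shared, while ordinary objects passed via workerData/postMessage are cloned (the config object here is just a stand-in):

```ts
import { Worker, isMainThread, workerData } from "worker_threads";

if (isMainThread) {
  const shared = new SharedArrayBuffer(4);
  const counter = new Int32Array(shared);

  // the SharedArrayBuffer is shared; the plain object next to it is cloned
  const worker = new Worker(__filename, {
    workerData: { shared, config: { check: true } },
  });
  worker.on("exit", () => {
    console.log(Atomics.load(counter, 0)); // 1 -- the worker's write is visible here
  });
} else {
  const counter = new Int32Array(workerData.shared as SharedArrayBuffer);
  Atomics.add(counter, 0, 1); // mutates memory shared with the main thread
  workerData.config.check = false; // only mutates this worker's clone
}
```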

So whether we use worker_threads or separate processes, we wouldn't really be able to share memory. At this time, it makes sense to simplify the theoretical model to "no shared memory", which effectively means different processes (either process-like threads or actual processes), and to move forward with an assumption of forking.

Forking Processes

The caveat with forking, as mentioned above, is that you're going to use more memory. Potentially significantly more, as each process has to duplicate some memory.

That means we may need to use more CPU as well, because not only do we have to construct and fill more data structures, but we may also need to re-parse various TS source files multiple times, since we can't pass the parsed objects between processes. And message-passing adds overhead too.
With modern multi-CPU architectures, even if this is significantly duplicative, the trade-off could be worthwhile if the parallelization decreases the absolute wall-clock time (i.e. even though the sum of duplicative processing would be higher than in a single process, the maximum time of any one process could be lower than that of a single process).

With that understanding of the various trade-offs as our baseline, we can dive into more practical specifics.

Prior Art in Webpack

We can look at prior work in Webpack-land as examples.

The best example of forking actually causing a de-optimization would be the history of awesome-typescript-loader (ATS), which has been archived for ~2 years now (and was unmaintained a bit longer). There are a lot of great details on performance in ATS, enough that they made it into the top of its README.

Some specific references: s-panferov/awesome-typescript-loader#497, s-panferov/awesome-typescript-loader#649, plus others.
I've probably read some more generic ones in thread-loader and happypack too. Don't remember the specific issues off the top of my head, but can add them here if I do.

So forking has a checkered performance history in Webpack, probably due to the above theoretical trade-offs.
And, even with fork-ts-checker-webpack-plugin's popularity, optimizing and tuning TS is still a general problem for anything in the TS ecosystem: TypeStrong/fork-ts-checker-webpack-plugin#684.

TS's official Performance docs even mention some optimizations and these caveats in other tooling as well.

Potential Next Steps

With that being said, it might be good to try some benchmarks or add an experimental forking mode to rpt2.

But, due to the above trade-offs and the possibility that this is actually a de-optimization, this is most certainly low priority -- it may not even be worthwhile to pursue something so experimental that may be thrown away.

Something I mentioned in #148 (comment) that could be a very relevant optimization for some users, and/or useful for this feature, is the introduction of emitDeclarationOnly support in 0.33.0 / #366.
With emitDeclarationOnly, one could get a performance increase by, say, using rollup-plugin-esbuild to do TS -> JS compilation, while rpt2 does type-checking and/or declaration generation.
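Roughly, that combination might look like the following (a sketch with abbreviated options; check each plugin's docs for the exact configuration and plugin ordering):

```ts
// rollup.config.js -- sketch: esbuild handles TS -> JS, while rpt2 only
// type-checks and emits declarations via emitDeclarationOnly
import esbuild from "rollup-plugin-esbuild";
import typescript from "rollup-plugin-typescript2";

export default {
  input: "src/index.ts",
  output: { dir: "dist", format: "es" },
  plugins: [
    typescript({
      tsconfigOverride: {
        compilerOptions: { declaration: true, emitDeclarationOnly: true },
      },
    }),
    esbuild(),
  ],
};
```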

Similarly, we might be able to use the emitDeclarationOnly support to simplify creating a new process -- i.e. the secondary type-checking process would run with emitDeclarationOnly: true, while the primary one would do TS -> JS. In this fashion, it might be doable to just instantiate rpt2 twice, once in a second process with slightly different args. This would be the rollup analogue of Webpack-land's ts-loader with transpileOnly: true + fork-ts-checker-webpack-plugin: rpt2 with check: false + rpt2 with emitDeclarationOnly: true, or something like that.
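Ignoring the process-separation mechanics for a moment, the two rpt2 configurations would roughly be the following (a sketch only; whether and how two instances coexist cleanly is exactly one of the kinks mentioned below):

```ts
import typescript from "rollup-plugin-typescript2";

// primary build: transpile only, skip the type-checker entirely
const transpileOnly = typescript({ check: false });

// secondary (ideally forked) instance: type-check and emit declarations, no JS
const checkOnly = typescript({
  tsconfigOverride: {
    compilerOptions: { declaration: true, emitDeclarationOnly: true },
  },
});
```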

There would still be more kinks to work out for sure, but that might simplify the work required a good bit if doable, as we wouldn't have to add nearly as much internal code to handle that.

@agilgur5
Collaborator

Wanted to note here that there is an older, alpha, unmaintained plugin: rollup-plugin-fork-ts-checker. This plugin was designed to work with rpt2 and actually uses the Webpack plugin under the hood. Its author is also the author of vite-plugin-checker.

I have not tested whether it works / still works, but it's around the same size as rpt2 in LoC, with significantly heavier dependencies, which may serve as a testament to the complexity of this issue.
