
Feature: worker pooling #27

Open
dbkaplun opened this issue May 9, 2018 · 2 comments

Comments


dbkaplun commented May 9, 2018

Hello,

It would be great if I could create a pool of workers. That way, parallelizing hot paths could be completely abstracted using workerize.

Considerations:

  • To implement worker pooling, a worker may have to know when another worker is busy, so it can take over handling new requests until it becomes busy itself
  • Since workers manage memory independently, they would all have to run the same code, and there would have to be a way to run a function on all workers simultaneously in order to share state
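As a rough, hypothetical sketch of these two points (not workerize's actual API): a pool could load-balance by tracking the number of open calls per worker, and offer a `broadcast()` helper for running the same function on every worker to share state. Here each "worker" is assumed to be any object exposing async methods, such as an instance returned by `workerize()`; the `WorkerPool` name and shape are invented for illustration.

```javascript
class WorkerPool {
  constructor(workers) {
    // Pair each worker with a counter of its in-flight calls.
    this.entries = workers.map((worker) => ({ worker, pending: 0 }));
  }

  // Route a call to the least-busy worker and track it until it settles.
  async call(method, ...args) {
    const entry = this.entries.reduce((least, e) =>
      e.pending < least.pending ? e : least
    );
    entry.pending++;
    try {
      return await entry.worker[method](...args);
    } finally {
      entry.pending--;
    }
  }

  // Run the same method on all workers at once, e.g. to replicate state.
  broadcast(method, ...args) {
    return Promise.all(this.entries.map((e) => e.worker[method](...args)));
  }
}
```

With real workers, `pool.call('expensiveFn', data)` would pick whichever worker has the fewest calls outstanding, and `pool.broadcast('setState', state)` would push shared state into every worker.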

It will be interesting to see what we will need to do for this. Would you accept a PR?

Thanks for considering!

@TimvanScherpenzeel

@developit recently added this gist that explores the idea of a worker pool: https://gist.github.com/developit/65a2212731f6b00a8aaa55d70c594f5c

I was wondering @developit, is this something you are planning to add to the repo at some stage or should it be seen more like a rough sketch or like a standalone extension of the library?

It looks like an attempt to solve the TODOs mentioned in workerize:

/** TODO:
 * - pooling (+ load balancing by tracking # of open calls)
 * - queueing (worth it? sortof free via postMessage already)
 */
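The "queueing" TODO could, hypothetically, amount to a small FIFO that caps how many calls run at once and defers the rest, which is roughly what postMessage's built-in message buffering already provides for a single busy worker. The `CallQueue` below is an illustrative sketch, not anything from workerize or the gist:

```javascript
class CallQueue {
  constructor(limit) {
    this.limit = limit;   // max calls allowed in flight at once
    this.active = 0;      // calls currently running
    this.waiting = [];    // deferred call starters, FIFO order
  }

  // Run fn() now if under the limit, otherwise queue it; resolves with
  // fn's result either way.
  run(fn) {
    return new Promise((resolve, reject) => {
      const start = () => {
        this.active++;
        fn().then(resolve, reject).finally(() => {
          this.active--;
          const next = this.waiting.shift();
          if (next) next();
        });
      };
      if (this.active < this.limit) start();
      else this.waiting.push(start);
    });
  }
}
```

Combined with pooling, each worker in the pool could own one such queue, giving both load balancing and bounded concurrency.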


jzyzxx commented Jun 24, 2021

> Hello,
>
> It would be great if I could create a pool of workers. That way, parallelizing hot paths could be completely abstracted using workerize.
>
> Considerations:
>
>   • To implement worker pooling, a worker may have to know when another worker is busy, so it can take over handling new requests until it becomes busy itself
>   • Since workers manage memory independently, they would all have to run the same code, and there would have to be a way to run a function on all workers simultaneously in order to share state
>
> It will be interesting to see what we will need to do for this. Would you accept a PR?
>
> Thanks for considering!

Good idea!
