
For discussion: add in-process async mode to Puma plugin #115

Open · wants to merge 1 commit into main
Conversation

bensheldon
Contributor

First, let me apologize for dropping what is essentially a spike; I figured it was simplest to share some real code.

I wanted to see how difficult it would be to add an "async" mode to the Puma plugin that would run the workers/dispatchers in the same process rather than forking. Not difficult at all!

Working on this did expose a few questions that I had about what an ideal implementation would be:

Configuration

I added a new "mode" to the Puma plugin, and also overloaded the worker configuration to make `processes: 0` mean something new. I think they're somewhat redundant.

`processes: 0` is not great (maybe someone wants to disable a worker but retain its configuration; this would make that more difficult). I went with 0 because I didn't want to mix integers and strings (e.g. `processes: async`), and...

In this implementation it's not possible to mix async and forking, and I think that could be nice (or maybe too complicated!). I could imagine async as a general mode of SolidQueue rather than something specific to the Puma plugin, to allow mixing-and-matching async and forked workers through the CLI too.
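To make the two redundant knobs concrete, here's a sketch of the overloaded worker setting in the queue configuration. This is illustrative only: the file layout follows Solid Queue's existing config conventions, and the `processes: 0` semantics are this spike's, not anything released.

```yaml
# config/solid_queue.yml — sketch; `processes: 0` semantics are from this spike
production:
  workers:
    - queues: "*"
      threads: 3
      processes: 0   # spike semantics: run in-process ("async") instead of forking
```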

Supervision

I made a new object called AsyncSupervisor, which overloads the concept of Supervisor a bit. It also made me think: if async and forked workers could be operated simultaneously, there would need to be an object (a "Director"?) that managed both an AsyncSupervisor and a Supervisor... and that was the point where I figured I should just share where I'm at 😊
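For illustration, a minimal sketch of what such a "Director" could look like, assuming it just fans start/stop out to both kinds of supervisors (every name and signature here is hypothetical, not from the spike):

```ruby
# Hypothetical sketch — class names and constructor signatures are illustrative.
# A Director that owns a forked Supervisor and an AsyncSupervisor and manages
# their lifecycles together.
class Director
  def initialize(fork_configuration: nil, async_configuration: nil)
    @supervisors = []
    @supervisors << SolidQueue::Supervisor.new(fork_configuration) if fork_configuration
    @supervisors << SolidQueue::AsyncSupervisor.new(async_configuration) if async_configuration
  end

  def start
    @supervisors.each(&:start)
  end

  def stop
    # Shut down in reverse start order.
    @supervisors.reverse_each(&:stop)
  end
end
```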

Slimming down workers

I think this naive spike still leaves the Worker objects with some behavior that maybe isn't necessary (process registration, each having their own heartbeat) or is possibly bad (e.g. setting the procline). It might be as simple as adding a few more `if supervised?` checks, because I didn't see anything that strongly implied that Workers must run in a subprocess.
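As a sketch of what those checks could look like (method names here are illustrative, assuming a `supervised?` predicate on the worker):

```ruby
# Illustrative only — guards subprocess-only behavior behind `supervised?`.
def boot
  if supervised?        # i.e. running as a forked child process
    register_process    # registration only matters for a dedicated process
    start_heartbeat     # in-process workers could share the host's heartbeat
    set_procline        # renaming $0 inside a shared Puma process would be bad
  end
  start_polling
end
```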

Lastly, I wanted to try this just to dig into SolidQueue, so please feel free to disregard/dispose of it if you already have completely different ideas for how to implement this functionality. I'm excited regardless 🎉

@rosa
Member

rosa commented Jan 9, 2024

Hey @bensheldon, thank you so much for this! I really want to have this feature, so I really appreciate you doing the work!

> I could imagine async as a general mode of SolidQueue rather than something specific to the Puma plugin, to allow mixing-and-matching async and forked workers through the CLI too.

Yeah, I'd like this too. I think I'd like to get rid of `mode` as it is now for the Supervisor (`work`, `dispatch`), leaving that to the configuration (if you have dispatchers or workers there, just work/dispatch based on that), and have it mean the same thing for workers and dispatchers. Maybe it'd make sense to have `async` and `fork` as the modes 🤔

And maybe that'd make sense for workers and dispatchers too: instead of `supervised`, they could have a `forked` or `fork` mode, and we could slim them down when they're not in fork mode. They'd still be "supervised" in a sense when running in this new async mode you introduced. Then the configuration would know the difference between these and would ignore `processes` altogether when going async, so we wouldn't need `processes: 0`.
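For what it's worth, here's roughly how that configuration might read (keys and values are hypothetical, just to make the fork/async idea concrete):

```yaml
# Hypothetical configuration under the proposed fork/async modes
production:
  workers:
    - queues: background
      mode: fork       # current behavior: supervised, forked child processes
      processes: 2
    - queues: real_time
      mode: async      # runs inside the supervising process; `processes` ignored
```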

I'll continue thinking about this, let me know if you have any thoughts about the above, and thanks again for this, I love it.
