
CPU-intensive handling #23

Open
cliffano opened this issue Nov 24, 2013 · 6 comments

@cliffano

This is probably borderline between a question and a feature request.

How will KrakenJS handle the situation where the application logic is CPU-intensive? Requests keep getting accepted, the event-loop backlog grows, and eventually the process runs out of memory.

Is there any plan to add something like the toobusy module to detect this situation, so that Kraken would stop accepting requests when the process is already 'busy'?

Or would this be considered an ops problem that shouldn't be handled by Kraken?
Edit: or is it the responsibility of the application to move the CPU-intensive task to a separate process outside of Kraken?
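
For reference, the pattern being asked about looks roughly like the Express example in the node-toobusy readme: a middleware samples event-loop lag on every request and sheds load with a 503 before the backlog can grow unbounded. A minimal sketch (Express 3-era API, as current at the time):

```js
var toobusy = require('toobusy');
var express = require('express');

var app = express();

// Shed load before doing any real work: toobusy() returns true when
// event-loop lag exceeds the module's configured threshold.
app.use(function (req, res, next) {
    if (toobusy()) {
        res.send(503, 'Server is too busy right now, sorry.');
    } else {
        next();
    }
});

var server = app.listen(3000);

process.on('SIGINT', function () {
    server.close();
    toobusy.shutdown(); // stop the lag-sampling timer so the process can exit
    process.exit();
});
```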

@lensam69
Contributor

Hi @cliffano,
Kraken is just a layer on top of express.
I don't think this is something that should be kraken's responsibility, but discussion is welcome.

@cliffano
Author

How about having node-toobusy as something that Kraken can optionally set up, just as it does with the lusca security features? It wouldn't be something Kraken needs to implement itself.
(The node-toobusy readme provides sample usage with Express.)

I see this as a common need across node.js apps, since they all risk the same problem under extreme load.
Without this, every app written on top of Kraken will have to wire up node-toobusy itself.

I understand that what goes into the Kraken suite is not going to satisfy everyone.
What would be the rule of thumb for deciding whether some feature foobar can go in as part of the Kraken suite, or whether it should be wired up by the application itself?

@totherik
Member

tl;dr 😬

Thanks, @cliffano! Yeah, internally we've been having some discussions about this for a while now. node-toobusy is a great module and apps would be well served to use it, but I can think of a few areas of concern that make me reluctant to add it here.

First, the role of something like toobusy seems like it could mean different things to different applications. For some apps it might suffice to treat event-loop saturation as an error, merely returning a 503 and moving on. For others, the role of this instrumentation may be more focused on ops concerns, like you mentioned above. Which leads me to configuration.

The power in toobusy is in its simplicity. The flexibility it offers as to what to do when your server is indeed too busy complicates configuration at this abstraction layer. With modules like lusca, you just pass in static config values, but we would need an API that allows for configuration of behavior, e.g. a handler describing what to do in an ultra general way such that it's valuable for myriad use-cases. At that point the config process would probably look much like middleware registration (just like vanilla toobusy usage), and obviate the need for us to be in the middle. If you have thoughts around what an unintrusive API might look like, I'd love to collaborate on this.
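
To make that configuration concern concrete, here is a purely hypothetical shape for behavior-level config; none of these option names exist in Kraken, they only illustrate the problem:

```js
// Hypothetical only. Unlike lusca's static values, the interesting part
// here is a function, i.e. behavior rather than data:
var options = {
    toobusy: {
        maxLag: 70, // ms of event-loop lag before shedding load
        handler: function (req, res) {
            // What "too busy" means is app-specific:
            // a 503, a redirect, emitting metrics, ...
            res.send(503, 'Server is too busy.');
        }
    }
};
```

Once a handler function is involved, this is effectively middleware registration in disguise, which is the point made above about the framework no longer adding value in the middle.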

Finally, and this is a big one, there are other interesting tools out there to solve this problem, and I hesitate to lock developers into a solution. Yahoo has a pair of tools called process-watcher and monitr that handle this in a slightly different way, so at the very least this implementation would need super-smart defaults or a dead easy way to enable/disable.

Regarding your 'rule of thumb' question, we've mostly been attempting to walk the fine line between providing just enough and not prescribing too much, exposing configuration for the space in between. toobusy may fit that role perfectly, but if in practice our API is no simpler than a developer just using toobusy directly, I'd err on the side of maintaining less code. Also, just as with cluster (which has been intentionally omitted), it walks that line of "is it an ops or app concern?" and I, personally, don't want clear ops concerns leaking into this module.

@cliffano
Author

cliffano commented Dec 4, 2013

Thanks @totherik for the detailed feedback. I'm sorry for the late reply; I was offline for several days last week.
I see what you're saying about what Kraken is, and about the fine line between just enough and prescribing too much.

Kraken definitely provides the foundation of what most web apps will need, but at the same time different organisations will have different needs, and using toobusy/process-watcher/monitr could be one of them.
Using toobusy is simple enough, but other tools might not be as simple and might require a number of lines to be repeated in each and every web app, so I thought it would be nice if the framework could reduce this to a simple config entry.

Just an idea: one solution is to create a module composed of various custom tools and hooked up to Kraken's application lifecycle. This would allow an organisation to bundle the stuff that is "too much for Kraken" into this module, and leave Kraken with the "just enough" parts.
Does this sound similar to what you're thinking?
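
A rough, entirely hypothetical sketch of such a bundling module (all names invented for illustration):

```js
// our-ops-bundle.js -- hypothetical organisation-level module that wires up
// the "too much for Kraken" extras once, so individual apps don't repeat them.
var toobusy = require('toobusy');

module.exports = function opsBundle(app) {
    // app is the underlying express app that Kraken builds on.
    app.use(function (req, res, next) {
        if (toobusy()) {
            res.send(503, 'Server is too busy.');
        } else {
            next();
        }
    });
    // process-watcher/monitr wiring, metrics, etc. could be attached here.
};
```

Each app would then just call `require('our-ops-bundle')(app)` from whatever lifecycle hook Kraken exposes for application configuration.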

Re unintrusive API, I think Kraken's application lifecycle is a good start for extension points.

Re ops stuff: I know this is crazy, but there are still a number of organisations with a tall fence between dev and ops. It is sometimes necessary for the dev team to bundle ops-related stuff inside the app due to problems on the ops side, e.g. lack of funding, or an ops team that can't add foobar to their setup. Sad but true.

@totherik
Member

totherik commented Dec 9, 2013

Hey @cliffano, I think we can keep this one around. The more I think about it, the more I think we could provide unintrusive, opt-in config settings for toobusy-style behavior with basic defaults. Essentially, similar to what hapi provides in the way of setting thresholds for various metrics, above which we'll send a 503.
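
As a sketch of what such opt-in settings might look like in Kraken's JSON config (hypothetical keys, loosely modeled on hapi's load thresholds; nothing here is an existing Kraken option):

```json
{
    "toobusy": {
        "enabled": true,
        "maxEventLoopDelay": 70,
        "maxHeapUsedBytes": 1073741824,
        "maxRssBytes": 1610612736
    }
}
```

Static values like these sidestep the handler-configuration problem discussed earlier: the framework's only behavior is to answer 503 when a threshold is crossed.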

Secondly, I like the idea of providing additional bolt-on functionality and in some ways it may be more straightforward once issue #50 gets sorted out.

@totherik totherik reopened this Dec 9, 2013
@jeffharrell
Member

Renaming to track better.
