
cpuUsageThrottle #1460

Merged
merged 22 commits into from Sep 8, 2017
46 changes: 45 additions & 1 deletion docs/api/plugins.md
@@ -260,8 +260,14 @@ event, e.g., `server.on('after', plugins.metrics());`:
The module includes the following plugins to be used with restify's `pre` event:
* `inflightRequestThrottle(options)` - limits the max number of inflight requests
* `options.limit` {Number} the maximum number of inflight requests the server will handle before returning an error
* `options.err` {Error} A restify error used as a response when the inflight request limit is exceeded
* `options.server` {Object} The restify server that this module will throttle
* `cpuUsageThrottle(options)` - Rejects requests based on the server's current CPU usage
* `options.limit` - {Number} The point at which restify will begin rejecting a % of all requests at the front door
* `options.max` - {Number} The point at which restify will reject 100% of all requests at the front door
* `options.interval` - {Number} How frequently to recalculate the % of traffic to reject
* `options.halfLife` - {Number} How responsive the plugin is to spikes in CPU usage; for more details, read the cpuUsageThrottle section below
* `options.err` - {Error} A restify error used as a response when the CPU usage limit is exceeded


## QueryParser
@@ -525,6 +531,44 @@ requests. It defaults to `503 ServiceUnavailableError`.
This plugin should be registered as early as possibly in the middleware stack
using `pre` to avoid performing unnecessary work.

## CPU Usage Throttling

```js
var restify = require('restify');

var server = restify.createServer();
const options = {
    limit: 0.75,
    max: 1,
    interval: 250,
    halfLife: 500
};

server.pre(restify.plugins.cpuUsageThrottle(options));
```

cpuUsageThrottle is a middleware that rejects a variable number of requests (between 0% and 100%) based on a historical view of CPU utilization of a Node.js process. Essentially, this plugin allows you to define what constitutes a saturated Node.js process via CPU utilization, and it will handle dropping a % of requests based on that definition. This is useful when you would like to keep CPU bound tasks from piling up, causing increased per-request latency.
Member: For bonus points might want to link to the ewma paper, but if you're already doing that in the ewma module, then maybe not. I leave that up to you :)

Member Author: Done in the EWMA module 😄 the 🐰 🕳️ is deep 😉


The algorithm asks you for a maximum CPU utilization rate, which it uses to determine at what point it should be rejecting 100% of traffic. For a normal Node.js service, this is 1, since Node is single threaded. It uses this, paired with a limit that you provide, to determine the total % of traffic it should be rejecting. For example, if you specify a limit of 0.5 and a max of 1, and the current EWMA (next paragraph) value reads 0.75, this plugin will reject approximately 50% of all requests.
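The proportional rejection just described can be sketched in a few lines; the function name here is illustrative, not part of the plugin's API, and the clamp at 0 simply expresses that nothing is shed while usage stays under the limit:

```js
// Fraction of traffic to reject, given the current EWMA of CPU usage.
// Clamped below by 0: when usage is under the limit, nothing is shed.
function rejectFraction(cpu, limit, max) {
    return Math.max(0, (cpu - limit) / (max - limit));
}

// With limit = 0.5 and max = 1, a CPU reading of 0.75 sheds ~50% of traffic.
console.log(rejectFraction(0.75, 0.5, 1)); // 0.5
console.log(rejectFraction(0.4, 0.5, 1));  // 0 (under the limit)
```

Because max must be strictly greater than limit, the denominator can never be zero.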

When looking at the process' CPU usage, this algorithm will take a load average over a user specified interval. For example, if given an interval of 250ms, this plugin will attempt to record the average CPU utilization over 250ms intervals. Due to contention for resources, the duration of each average may be wider or narrower than 250ms. To compensate for this, we use an exponentially weighted moving average. The EWMA algorithm is provided by the ewma module. The parameter for configuring the EWMA is halfLife. This value controls how quickly each load average measurement decays to half its value when being represented in the current average. For example, if you have an interval of 250 and a halfLife of 250, you will take the previous ewma value multiplied by 0.5 and add it to the new CPU utilization average measurement multiplied by 0.5. The previous value and the new measurement would each represent 50% of the new value. A good way of thinking about the halfLife is in terms of how responsive this plugin will be to spikes in CPU utilization. The higher the halfLife, the longer CPU utilization will have to remain above your defined limit before this plugin begins rejecting requests and, conversely, the longer it will have to drop below your limit before the plugin begins accepting requests again. This is a knob you will want to play with when trying to determine the ideal value for your use case.
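A rough sketch of a single half-life weighting step, assuming time-based exponential decay as in the ewma module (the function and names are illustrative, not the module's actual API):

```js
// One EWMA step: the old average decays by 0.5^(elapsed / halfLife),
// and the new sample supplies the remaining weight.
function ewmaStep(oldAvg, sample, elapsedMs, halfLifeMs) {
    var w = Math.pow(0.5, elapsedMs / halfLifeMs);
    return oldAvg * w + sample * (1 - w);
}

// When the elapsed time equals the halfLife, old and new each carry 50%:
// ewmaStep(0.8, 0.4, 250, 250) is ~0.6.
```

Note how a long elapsed time (relative to halfLife) drives the weight of the old average toward zero, which is how the algorithm tolerates intervals that run wider than requested.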

For a better understanding of the EWMA algorithm, refer to the documentation for the ewma module.

Params:
* `limit` - The point at which restify will begin rejecting a % of all requests at the front door. This value is a percentage. For example `0.8` === 80% average CPU utilization. Defaults to `0.75`.
* `max` - The point at which restify will reject 100% of all requests at the front door. This is used in conjunction with limit to determine what % of traffic restify needs to reject when attempting to bring the average load back within tolerable thresholds. Since Node.js is single threaded, the default for this is `1`. In some rare cases, a Node.js process can exceed 100% CPU usage and you will want to update this value.
* `interval` - How frequently we calculate the average CPU utilization. When we calculate an average CPU utilization, we calculate it over this interval, and this drives whether or not we should be shedding load. This can be thought of as a "resolution": the lower this value, the higher the resolution our load average will be and the more frequently we will recalculate the % of traffic we should be shedding. This check is rather lightweight; while the default is 250ms, you should be able to decrease this value without seeing a significant impact on performance.
* `halfLife` - When we sample the CPU usage on an interval, we create a series of data points. We take these points and calculate a moving average. The halfLife indicates how quickly a point "decays" to half its value in the moving average. The lower the halfLife, the more impact newer data points have on the average. If you want to be extremely responsive to spikes in CPU usage, set this to a lower value. If you want your process to put more emphasis on recent historical CPU usage when determining whether it should shed load, set this to a higher value. The unit is in ms. Defaults to `250`.
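The per-request drop decision itself is stateless: each request is compared against a uniform random draw, so over many requests approximately the target fraction is shed. A minimal sketch with an illustrative name (the plugin does the equivalent comparison internally):

```js
// Returns true when this request should be shed. Because Math.random()
// is uniform on [0, 1), each request is dropped with probability `reject`.
function shouldShed(reject) {
    return Math.random() < reject;
}

// With reject = 0 no request is ever shed; with reject = 1 every request
// is shed, since Math.random() never returns exactly 1.
```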

You can also update the plugin during runtime using the `.update()` function. This function accepts the same `opts` object as the constructor.

```js
var plugin = restify.plugins.cpuUsageThrottle(options);
server.pre(plugin);

plugin.update({ limit: 0.4, halfLife: 5000 });
```

## Conditional Request Handler

248 changes: 248 additions & 0 deletions lib/plugins/cpuUsageThrottle.js
@@ -0,0 +1,248 @@
'use strict';

var assert = require('assert-plus');
var pidusage = require('pidusage');
var errors = require('restify-errors');
var EWMA = require('ewma');

/**
* cpuUsageThrottle
*
* cpuUsageThrottle is a middleware that rejects a variable number of requests
* (between 0% and 100%) based on a historical view of CPU utilization of a
* Node.js process. Essentially, this plugin allows you to define what
* constitutes a saturated Node.js process via CPU utilization and it will
 * handle dropping a % of requests based on that definition. This is useful
 * when you would like to keep CPU bound tasks from piling up, causing
 * increased per-request latency.
*
* The algorithm asks you for a maximum CPU utilization rate, which it uses to
* determine at what point it should be rejecting 100% of traffic. For a normal
* Node.js service, this is 1 since Node is single threaded. It uses this,
* paired with a limit that you provide to determine the total % of traffic it
* should be rejecting. For example, if you specify a limit of .5 and a max of
* 1, and the current EWMA (next paragraph) value reads .75, this plugin will
* reject approximately 50% of all requests.
*
* When looking at the process' CPU usage, this algorithm will take a load
 * average over a user specified interval. For example, if given an interval of
* 250ms, this plugin will attempt to record the average CPU utilization over
* 250ms intervals. Due to contention for resources, the duration of each
* average may be wider or narrower than 250ms. To compensate for this, we use
* an exponentially weighted moving average. The EWMA algorithm is provided by
* the ewma module. The parameter for configuring the EWMA is halfLife. This
 * value controls how quickly each load average measurement decays to half its
* value when being represented in the current average. For example, if you
* have an interval of 250, and a halfLife of 250, you will take the previous
* ewma value multiplied by 0.5 and add it to the new CPU utilization average
* measurement multiplied by 0.5. The previous value and the new measurement
* would each represent 50% of the new value. A good way of thinking about the
* halfLife is in terms of how responsive this plugin will be to spikes in CPU
* utilization. The higher the halfLife, the longer CPU utilization will have
* to remain above your defined limit before this plugin begins rejecting
 * requests and, conversely, the longer it will have to drop below your limit
* before the plugin begins accepting requests again. This is a knob you will
 * want to play with when trying to determine the ideal value for your use
* case.
*
 * For a better understanding of the EWMA algorithm, refer to the documentation
* for the ewma module.
*
* @param {Object} opts Configure this plugin.
* @param {Number} [opts.limit] The point at which restify will begin rejecting
* a % of all requests at the front door. This value is a percentage.
* For example 0.8 === 80% average CPU utilization. Defaults to 0.75.
* @param {Number} [opts.max] The point at which restify will reject 100% of all
* requests at the front door. This is used in conjunction with limit to
* determine what % of traffic restify needs to reject when attempting to
* bring the average load back to the user requested values. Since Node.js is
* single threaded, the default for this is 1. In some rare cases, a Node.js
* process can exceed 100% CPU usage and you will want to update this value.
* @param {Number} [opts.interval] How frequently we calculate the average CPU
* utilization. When we calculate an average CPU utilization, we calculate it
* over this interval, and this drives whether or not we should be shedding
* load. This can be thought of as a "resolution" where the lower this value,
* the higher the resolution our load average will be and the more frequently
* we will recalculate the % of traffic we should be shedding. This check
* is rather lightweight, while the default is 250ms, you should be able to
* decrease this value without seeing a significant impact to performance.
* @param {Number} [opts.halfLife] When we sample the CPU usage on an interval,
* we create a series of data points. We take these points and calculate a
* moving average. The halfLife indicates how quickly a point "decays" to
 * half its value in the moving average. The lower the halfLife, the more
* impact newer data points have on the average. If you want to be extremely
* responsive to spikes in CPU usage, set this to a lower value. If you want
* your process to put more emphasis on recent historical CPU usage when
 * determining whether it should shed load, set this to a higher value. The
* unit is in ms. Defaults to 250.
* @returns {Function} middleware to be registered on server.pre
*/
function cpuUsageThrottle (opts) {

// Scrub input and populate our configuration
assert.object(opts, 'opts');
assert.optionalNumber(opts.limit, 'opts.limit');
assert.optionalNumber(opts.max, 'opts.max');
assert.optionalNumber(opts.interval, 'opts.interval');
assert.optionalNumber(opts.halfLife, 'opts.halfLife');

var self = {};
self._limit = (typeof opts.limit === 'number') ?
opts.limit : 0.75;
self._max = opts.max || 1;
self._interval = opts.interval || 250;
self._halfLife = (typeof opts.halfLife === 'number') ? opts.halfLife : 250;
assert.ok(self._max > self._limit, 'limit must be less than max');

self._ewma = new EWMA(self._halfLife);
Member Author: 0_0 why am I not using this?


// self._reject represents the % of traffic that we should reject at the
// current point in time based on how much over our limit we are. This is
// updated on an interval by updateReject().
self._reject = 0;

// self._timeout keeps track of the current handle for the setTimeout we
// use to gather CPU load averages, this allows us to cancel the timeout
// when shutting down restify.
self._timeout = null;
// self._timeoutDelta represents the amount of time between when we _should_
// have run updateReject and the actual time it was invoked. This allows
// us to monitor lag caused by both the event loop and pidusage.stat
self._timeoutDelta = 0;
self._timeoutStart = Date.now();

// updateReject should be called on an interval, it checks the average CPU
// usage between two invocations of updateReject.
function updateReject() {
pidusage.stat(process.pid, function (e, stat) {
// If we were unable to get cpu usage, don't make any new decisions.
if (!stat ||
typeof stat.cpu !== 'number' ||
Number.isNaN(stat.cpu)) {
return;
}

// Divide by 100 to match Linux's `top` format
self._ewma.insert(stat.cpu / 100);
Member: What happens when you insert NaN?

self._cpu = self._ewma.value();

// Update reject with the % of traffic we should be rejecting. This
// is safe since max > limit so the denominator can never be 0. If
// the current cpu usage is less that the limit, _reject will be
// negative and we will never shed load
self._reject =
(self._cpu - self._limit) / (self._max - self._limit);
self._timeout = setTimeout(updateReject, self._interval);
Member: Under heavy load updateReject() might execute later than you'd like it to - is it valuable to incorporate that delta into the algo?

Member Author: Yup, EWMA accounts for this :-) It is the reason for the crazy magic maths here: https://github.com/ReactiveSocket/ewma/blob/master/index.js#L55

Member: Even tho ewma accounts for this in the algorithm, the interval is a parameter configured by the user, and we should strive to stick to said interval. I think it still makes sense to keep track of the delta here, since changes to the interval will affect the throttling algorithm.

Member Author: Agreed. That is what I'm trying to accomplish with this code, though there may be a better way. Since pid.stat is async and its performance is not necessarily bound to process saturation, it's possible the first call to it could take a few minutes to return. With setInterval we have the potential to queue a whole bunch of pid.stat calls up while that first one blocks. Then it ends up being a race to burn through that queue. This tries to ensure we call pid.stat exactly once at a time and that the time between it returning back to us and us calling it again is as close to the interval the user requested as possible.

Member: What about just exposing the delta for now then (through EE or similar) and we can keep track of it as we first deploy? As we gather data it should inform us just how much of an impact there will be. In other words, no need to consume it in a meaningful way now, but at least we have some visibility into what's going on. Although, now that I think about it - would something like this already be captured in the event loop metrics?

Member: Sorry, I think there's been some confusion. I am suggesting we keep track of how much time has elapsed so that we can update the interval appropriately. E.g. if the interval is 500ms, but it took us 300ms to go through this current interval, we should then set the timeout to 200ms, so that we're ensuring we're firing the interval every 500ms. Otherwise in this example we would be running the next interval 800ms after the last interval.

Member Author: There could be latency introduced by the file descriptor read on Linux or spinning up the subprocess on Windows (inside of pidusage.stat). I'll export it as a value off of state. Also adding it to the error context when we shed load.

var now = Date.now();
self._timeoutDelta = now - self._timeoutStart;
self._timeoutStart = now;
});
}

// Kick off updating our _reject value
updateReject();

function onRequest (req, res, next) {
// Check to see if this request gets rejected. Since, in updateReject,
// we calculate a percentage of traffic we are planning to reject, we
// can use Math.random() (which picks from a uniform distribution in
// [0,1)) to give us a `self._reject`% chance of dropping any given
// request. This is a stateless way to drop approximately `self._reject`%
// of traffic.
var probabilityDraw = Math.random();

if (probabilityDraw >= self._reject) {
return next(); // Don't reject this request
}

var err = new errors.ServiceUnavailableError({
context: {
plugin: 'cpuUsageThrottle',
cpuUsage: self._cpu,
limit: self._limit,
max: self._max,
reject: self._reject,
halfLife: self._halfLife,
interval: self._interval,
probabilityDraw: probabilityDraw,
lag: self._timeoutDelta
}
});

return next(err);
}

// Allow the app to clear the timeout for this plugin if necessary, without
// this we would never be able to clear the event loop when letting Node
// shut down gracefully
function close () {
clearTimeout(self._timeout);
}
onRequest.close = close;

// Expose internal plugin state for introspection
Object.defineProperty(onRequest, 'state', {
get: function () {
// We intentionally do not expose ewma since we don't want the user
// to be able to update it's configuration, the current state of
// ewma is represented in self._cpu
return {
limit: self._limit,
max: self._max,
interval: self._interval,
halfLife: self._halfLife,
cpuUsage: self._cpu,
reject: self._reject,
lag: self._timeoutDelta
};
}
});

/**
* cpuUsageThrottle.update
*
* Allow the plugin's configuration to be updated during runtime.
*
* @param {Object} newOpts The opts object for reconfiguring this plugin,
* it follows the same format as the constructor for this plugin.
* @returns {undefined}
*/
onRequest.update = function update(newOpts) {
assert.object(newOpts, 'newOpts');
Member: For this function you might actually want to log/emit that we've updated the parameters, to ease debugging later on.

Member Author: Don't have access to the logger here :-\ I can start caching the logger on the first request though.
assert.optionalNumber(newOpts.limit, 'newOpts.limit');
assert.optionalNumber(newOpts.max, 'newOpts.max');
assert.optionalNumber(newOpts.interval, 'newOpts.interval');
assert.optionalNumber(newOpts.halfLife, 'newOpts.halfLife');

if (newOpts.limit !== undefined) {
self._limit = newOpts.limit;
}

if (newOpts.max !== undefined) {
self._max = newOpts.max;
}

if (newOpts.interval !== undefined) {
self._interval = newOpts.interval;
}

if (newOpts.halfLife !== undefined) {
self._halfLife = newOpts.halfLife;
// update our ewma with the new halfLife, we use the previous known
// state as the initial state for our new halfLife in lieu of
// having access to true historical data.
self._ewma = new EWMA(self._halfLife, self._cpu);
}

// Ensure new values are still valid
assert.ok(self._max > self._limit, 'limit must be less than max');

// Update _reject with the new settings
self._reject =
(self._cpu - self._limit) / (self._max - self._limit);
};

return onRequest;
}

module.exports = cpuUsageThrottle;
1 change: 1 addition & 0 deletions lib/plugins/index.js
@@ -11,6 +11,7 @@ module.exports = {
bodyParser: require('./bodyParser'),
bodyReader: require('./bodyReader'),
conditionalRequest: require('./conditionalRequest'),
cpuUsageThrottle: require('./cpuUsageThrottle.js'),
dateParser: require('./date'),
fullResponse: require('./fullResponse'),
gzipResponse: require('./gzip'),
3 changes: 3 additions & 0 deletions package.json
@@ -97,13 +97,15 @@
"clone-regexp": "^1.0.0",
"csv": "^1.1.0",
"escape-regexp-component": "^1.0.2",
"ewma": "^2.0.1",
"formidable": "^1.0.17",
"http-signature": "^1.0.0",
"lodash": "^4.17.4",
"lru-cache": "^4.0.1",
"mime": "^1.4.0",
"negotiator": "^0.6.1",
"once": "^1.3.0",
"pidusage": "^1.1.6",
"qs": "^6.2.1",
"restify-errors": "^5.0.0",
"semver": "^5.0.1",
@@ -128,6 +130,7 @@
"mocha": "^3.2.0",
"nodeunit": "^0.11.0",
"nsp": "^2.2.0",
"proxyquire": "^1.8.0",
"restify-clients": "^1.2.1",
"rimraf": "^2.4.3",
"validator": "^7.0.0",